
ParticleNet: Graph NN for Jet Tagging

Updated 3 December 2025
  • ParticleNet is a graph-based neural network that represents jets as unordered particle clouds with permutation symmetry.
  • It employs dynamic kNN graph construction and EdgeConv layers to capture local and hierarchical event structures.
  • Benchmark studies demonstrate that ParticleNet outperforms traditional models in jet flavor and substructure tagging tasks.

ParticleNet is a graph-based neural network architecture tailored for jet tagging in high-energy physics. It models a jet as a "particle cloud"—an unordered set of constituent particles—and employs dynamic edge convolution (EdgeConv) layers to capture local and hierarchical event structure. Permutation symmetry, adaptive neighborhood selection via dynamic k-nearest-neighbor graphs, and rigorously defined per-particle input features are central to its design. ParticleNet has established state-of-the-art performance in benchmark jet identification and flavor tagging tasks, notably outperforming DeepSets-based approaches and traditional boosted decision tree (BDT) algorithms in both simulated and experimental studies (Qu et al., 2019, Li et al., 2021, Liao et al., 2022, Zhu et al., 2023, Mokhtar et al., 2022, Dong et al., 1 Jul 2024, Shimmin, 2021).

1. Particle Cloud Representation and Permutation Symmetry

ParticleNet formulates jets as unordered sets of constituent particles, each described by a feature vector. This representation, termed "particle cloud," encodes kinematic, geometric, and detector-level properties such as:

  • Four-momentum components $(p_x, p_y, p_z, E)$, or derived variables $(\Delta\eta, \Delta\phi, \log p_T, \log E)$ optimized for the collider context.
  • Particle identification (PID) flags (isElectron, isPhoton, isChargedPion, etc.).
  • Charge and impact-parameter observables $(d_0, z_0, \sigma_{d_0}, \sigma_{z_0})$.
  • Additional correlation features such as energy sums, angles centered on event-level axes, or normalized energies $E_i/\sum_j E_j$ (Qu et al., 2019, Li et al., 2021, Liao et al., 2022, Zhu et al., 2023, Dong et al., 1 Jul 2024).

Permutation symmetry is intrinsic—particle ordering is arbitrary, and network outputs are invariant to input permutations. This set-equivariance is achieved by weight sharing and symmetric aggregation in EdgeConv layers, preventing spurious correlations and reducing parameter count (Qu et al., 2019).
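As a minimal sketch of the particle-cloud representation, the snippet below builds the derived features $(\Delta\eta, \Delta\phi, \log p_T, \log E)$ from toy four-momenta and checks that a symmetric aggregation is unchanged under particle reordering. The toy data and the jet axis at $(0, 0)$ are illustrative stand-ins, not values from any of the cited studies.

```python
import numpy as np

def particle_features(p4, jet_axis):
    """Per-particle features (d_eta, d_phi, log pT, log E) relative to
    the jet axis (eta_jet, phi_jet).
    p4: array of shape (n_particles, 4) with columns (px, py, pz, E)."""
    px, py, pz, E = p4.T
    pt = np.hypot(px, py)
    eta = np.arcsinh(pz / pt)
    phi = np.arctan2(py, px)
    deta = eta - jet_axis[0]
    # wrap delta-phi into (-pi, pi]
    dphi = (phi - jet_axis[1] + np.pi) % (2 * np.pi) - np.pi
    return np.stack([deta, dphi, np.log(pt), np.log(E)], axis=1)

rng = np.random.default_rng(0)
p4 = np.abs(rng.normal(size=(8, 4))) + 1.0   # toy four-momenta (all positive)
feats = particle_features(p4, jet_axis=(0.0, 0.0))

# Permutation symmetry: a symmetric aggregation (here channel-wise max)
# gives the same result for any ordering of the particles.
perm = rng.permutation(8)
assert np.allclose(feats.max(axis=0), feats[perm].max(axis=0))
```

The same invariance carries through the full network because every layer combines per-particle transformations with symmetric aggregations.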

2. Dynamic Graph Construction and EdgeConv Layers

A distinguishing methodological feature of ParticleNet is dynamic graph construction. At each EdgeConv block:

  • A graph is formed by connecting each particle $i$ to its $k$ nearest neighbors $j \in \mathcal{N}_k(i)$ based on a chosen metric (initially $(\Delta\eta, \Delta\phi)$-space; subsequently, the learned feature space).
  • For each edge $(i,j)$, an edge-wise feature is computed via a multilayer perceptron (MLP) applied to $(x_i, x_j - x_i)$.
  • Aggregation over neighbor edges employs a channel-wise maximum or mean, respecting permutation invariance.

Formally, for feature vector $x_i^{(\ell)}$ at layer $\ell$, the update is
$$x_i^{(\ell+1)} = \rho_{j \in \mathcal{N}_k(i)} \left[ h_\Theta\!\left( x_i^{(\ell)},\; x_j^{(\ell)} - x_i^{(\ell)} \right) \right],$$
where $h_\Theta$ denotes the shared MLP, $\rho$ is the aggregation function (max or mean), and the graph adjacency is recomputed dynamically (Qu et al., 2019, Mokhtar et al., 2022).
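The update above can be sketched in NumPy. A single linear map with ReLU stands in for the shared MLP $h_\Theta$ (the actual blocks use deeper MLPs with batch normalization), and channel-wise max serves as $\rho$; the permutation-equivariance of the layer is checked at the end.

```python
import numpy as np

def knn_indices(x, k):
    """Indices of the k nearest neighbors of each point (excluding itself)."""
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    return np.argsort(d2, axis=1)[:, :k]

def edge_conv(x, k, W, b):
    """One EdgeConv update: for each particle i and neighbor j, apply a
    shared linear+ReLU map (a stand-in for the MLP h_Theta) to
    (x_i, x_j - x_i), then take a channel-wise max over the k edges."""
    nbrs = knn_indices(x, k)                        # (n, k)
    xi = np.repeat(x[:, None, :], k, axis=1)        # (n, k, d)
    xj = x[nbrs]                                    # (n, k, d)
    edges = np.concatenate([xi, xj - xi], axis=-1)  # (n, k, 2d)
    h = np.maximum(edges @ W + b, 0.0)              # shared "MLP" + ReLU
    return h.max(axis=1)                            # symmetric aggregation rho

rng = np.random.default_rng(1)
x = rng.normal(size=(10, 4))                        # 10 particles, 4 features
W, b = rng.normal(size=(8, 16)), np.zeros(16)
y = edge_conv(x, k=3, W=W, b=b)
assert y.shape == (10, 16)

# Equivariance: permuting the input particles permutes the output rows.
perm = rng.permutation(10)
assert np.allclose(edge_conv(x[perm], k=3, W=W, b=b), y[perm])
```

Because the neighbor graph is rebuilt from the current features at each block, later layers connect particles that are close in the learned representation rather than in $(\Delta\eta, \Delta\phi)$-space.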

Architecturally, ParticleNet adopts three (sometimes four) stacked EdgeConv blocks with increasing channel widths, e.g., (64, 64, 64), (128, 128, 128), (256, 256, 256). Channel-wise global pooling converts particle-level representations to event- or jet-level descriptors, further processed by fully connected layers and a final softmax or sigmoid classifier head (Li et al., 2021, Liao et al., 2022, Zhu et al., 2023, Dong et al., 1 Jul 2024).
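The pooling-and-head stage can be illustrated in isolation. Below, a random array stands in for the per-particle embeddings leaving the last EdgeConv block; global average pooling collapses the variable-length set into a fixed-size jet descriptor, followed by a fully connected layer and softmax (all weights are random, for shape illustration only).

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(2)

# Stand-in for per-particle embeddings after the last EdgeConv block
# (channel width 256, per the widths quoted above).
h = rng.normal(size=(30, 256))           # 30 particles, 256 channels

# Channel-wise global pooling: one fixed-size jet-level descriptor,
# independent of the number of particles and of their ordering.
jet_vec = h.mean(axis=0)                 # (256,)

# Fully connected layer + softmax classifier head (random weights).
W_fc, b_fc = rng.normal(size=(256, 2)) * 0.05, np.zeros(2)
probs = softmax(jet_vec @ W_fc + b_fc)
assert probs.shape == (2,) and np.isclose(probs.sum(), 1.0)
```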

3. Training Procedures and Hyperparameter Choices

Training regimens conform to deep learning standards but are adapted for physics-specific constraints:

  • Supervised optimization using categorical or binary cross-entropy loss, depending on classification setup.
  • Adam or AdamW optimizer, with typical learning rates in the $10^{-3}$ to $10^{-4}$ range.
  • Batch sizes between 128 and 1,024, with dataset splits of e.g. 60–80% for training and the remainder for validation/testing.
  • Event samples range from $10^5$ up to $1.6 \times 10^7$ events per paper, with explicit event-level normalization and feature centering (Li et al., 2021, Liao et al., 2022, Zhu et al., 2023, Mokhtar et al., 2022, Dong et al., 1 Jul 2024).
  • Dropout (commonly $p = 0.1$) and optional weight decay are used for regularization.
  • One-cycle or plateau-based learning rate schedules; early stopping based on validation metrics.
  • Preprocessing includes normalization of energies, angular centering relative to event axes, and one-hot encoding of categorical features.
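The preprocessing steps above can be sketched as follows. The PID category set is a hypothetical example (the actual categories are analysis-dependent), and the input values are toy numbers.

```python
import numpy as np

# Hypothetical PID categories for illustration; real analyses define their own.
PID_CLASSES = {"electron": 0, "photon": 1, "charged_pion": 2, "neutral_hadron": 3}

def preprocess(energies, etas, phis, pids, jet_eta, jet_phi):
    """Normalize energies to the jet energy sum, center angles on the
    jet axis, and one-hot encode the categorical PID feature."""
    e_frac = energies / energies.sum()                     # E_i / sum_j E_j
    deta = etas - jet_eta
    dphi = (phis - jet_phi + np.pi) % (2 * np.pi) - np.pi  # wrap to (-pi, pi]
    onehot = np.eye(len(PID_CLASSES))[[PID_CLASSES[p] for p in pids]]
    return np.concatenate([np.stack([e_frac, deta, dphi], axis=1), onehot], axis=1)

X = preprocess(
    energies=np.array([50.0, 30.0, 20.0]),
    etas=np.array([0.1, -0.2, 0.05]),
    phis=np.array([1.0, 1.2, 0.9]),
    pids=["electron", "photon", "charged_pion"],
    jet_eta=0.0, jet_phi=1.0,
)
assert X.shape == (3, 3 + len(PID_CLASSES))
assert np.isclose(X[:, 0].sum(), 1.0)   # energy fractions sum to one
```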

Hyperparameters such as $k$ (number of neighbors), EdgeConv block width, and number of blocks vary by application, but typically $k = 12$–$16$ with 3–4 blocks (Qu et al., 2019, Mokhtar et al., 2022, Li et al., 2021).

4. Benchmark Performance and Physics Implications

ParticleNet consistently achieves benchmark-leading results in jet flavor and substructure tagging:

  • On top-vs-QCD jet tagging ($p_T \in [550, 650]$ GeV), ParticleNet reached $\approx 94.0\%$ accuracy, AUC = 0.9858, and background rejection $1/\epsilon_b = 397$ at $\epsilon_s = 50\%$ (Qu et al., 2019, Shimmin, 2021).
  • For quark/gluon tagging, AUC = 0.9116 and $1/\epsilon_b = 39.8$ at $\epsilon_s = 50\%$ with PID inputs (Shimmin, 2021).
  • In Higgs decay classification at $e^+e^-$ colliders, ParticleNet extended to 39-class tasks with per-class accuracy $>90\%$ for leptonic decays, hadronic accuracies in the 75–90% range, and strong discrimination between signal and background channels (Li et al., 2021).
  • In CEPC studies, ParticleNet improved c-tagging purity and efficiency by $>50\%$ versus LCFIPlus, reducing the statistical uncertainty in $R_c$ by 40% and enabling sub-$10^{-5}$ precision in the $Z \to b\bar{b},\, c\bar{c}$ partial widths (Liao et al., 2022, Zhu et al., 2023).
  • In top quark polarimetry, adaptation to multi-graph inputs led to 20–40% improvements in spin-analyzing power at 0.5–0.2 working efficiencies compared to kinematic-only approaches (Dong et al., 1 Jul 2024).
  • Computational load is moderate ($\sim$366k parameters; GPU inference $<1$ ms/jet; CPU $\sim$23 ms/jet), competitive with mainstream alternatives (Qu et al., 2019).
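The background-rejection metric $1/\epsilon_b$ at fixed $\epsilon_s$ quoted above is computed from classifier score distributions; a sketch with toy scores (Beta-distributed stand-ins, not real tagger outputs) follows.

```python
import numpy as np

def background_rejection(scores_sig, scores_bkg, eff_sig=0.5):
    """Background rejection 1/eps_b at fixed signal efficiency eps_s:
    pick the score threshold that keeps eff_sig of signal jets, then
    invert the fraction of background jets passing that threshold."""
    thresh = np.quantile(scores_sig, 1.0 - eff_sig)
    eps_b = np.mean(scores_bkg >= thresh)
    return 1.0 / eps_b

rng = np.random.default_rng(3)
# Toy classifier scores: signal peaked high, background peaked low.
sig = rng.beta(4, 2, size=100_000)
bkg = rng.beta(2, 4, size=100_000)
rej = background_rejection(sig, bkg, eff_sig=0.5)
assert rej > 10.0   # well-separated toy scores give strong rejection
```

Better taggers shift the two score distributions further apart, raising the rejection at every working point.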

ParticleNet’s EdgeConv layers are shown (via layerwise relevance propagation) to recover physically meaningful substructure observables, notably prong multiplicity and inter-subjet correlations, thereby validating its learned representations against traditional hadronic structure (Mokhtar et al., 2022).

5. Comparative Analysis and Limitations

Relative to alternative architectures:

  • DeepSets/EFN/PFN methods encode jets as flat sets with permutation-invariant pooling but lack explicit local relational modeling, leading to lower accuracy (by up to 15% on benchmark tasks) (Qu et al., 2019, Li et al., 2021).
  • CNN-based image models (ResNeXt-50, P-CNN) achieve comparable accuracy but lag in background rejection and parameter efficiency (Qu et al., 2019).
  • The rotational Particle Convolution Network (rPCN) introduces explicit rotation equivariance, achieving similar AUC but slightly trailing ParticleNet except at aggressive working points and in IRC-safe regimes (Shimmin, 2021).

Limitations of ParticleNet include:

  • Lack of built-in rotation equivariance; angular patterns must be learned afresh at each orientation.
  • Dynamic kNN graph construction (per layer) can be computationally expensive and memory-intensive for high particle multiplicity.
  • Guarantees of infrared and collinear (IRC) safety depend on the input feature choices and on network linearity in $p_T$ (Shimmin, 2021).

6. Extensions, Robustness, and Future Directions

ParticleNet’s structure is adaptable across collider environments, detector geometries, and physics tasks:

  • Sensitivity studies reveal robustness to vertex detector configuration; most significant dependence is on inner layer radius, with less sensitivity to material budget and spatial resolution—an indication of strong pattern-recognition capability (Zhu et al., 2023).
  • Multi-graph extension for subjet-based tasks (top polarimetry) demonstrates utility beyond monolithic jet-level classification (Dong et al., 1 Jul 2024).
  • Potential improvements include accelerated kNN algorithms, attention-based pooling, incorporation of secondary vertex information, explicit representation of decay chains, adaptation to event-level reconstruction, and integration of systematic uncertainties (Li et al., 2021, Qu et al., 2019, Zhu et al., 2023).
  • The "particle cloud" paradigm may be extended to full-event graph architectures for broader tasks such as pileup mitigation, event classification, and grooming (Qu et al., 2019).

7. Interpretability and Physical Validation

Interpretability investigations using layerwise relevance propagation (LRP) demonstrate that ParticleNet learns jet substructure in a physically consistent manner:

  • After training, ParticleNet’s most relevant edges connect particles across subjets, correlating with known prong structure in top jets.
  • Distributions of high-relevance edge distances mimic traditional substructure observables.
  • The network’s physics alignment supports confidence in high-level classification outputs and motivates deployment in precision measurements (Mokhtar et al., 2022).

The architectural choices in ParticleNet—dynamic graph construction, permutation symmetry, localized edge convolutions, and hierarchical pooling—define its capacity to extract complex, physically meaningful features from collider data, providing both predictive accuracy and interpretability in high-energy physics analyses.
