Joint Metrics: minJADE & minJFDE
- The paper introduces joint metrics minJADE and minJFDE to evaluate the collective accuracy of multi-agent predictions by selecting the best complete scenario.
- It contrasts joint metrics with marginal metrics, emphasizing that only scenarios where all agents have low errors are rewarded, ensuring coherent group behavior.
- The methodology applies across domains such as multi-agent trajectory forecasting and noncoherent communications, highlighting both computational strategies and practical implications.
Joint metrics such as minJADE and minJFDE are performance measures tailored to evaluate the quality of joint predictions over multiple agents or interacting subsystems, with a particular focus on settings where coherence or collective accuracy is crucial. These metrics have found substantial adoption in multi-agent trajectory forecasting, noncoherent multiuser communications, integrated sensing-communication systems, and related domains. Unlike per-agent or marginal metrics that select each entity's best prediction independently, joint metrics select the best overall scenario across all agents, thereby directly quantifying the ability of models to generate coherent, plausible, and compatible joint outcomes.
1. Formal Definitions and Mathematical Structure
Let $N$ denote the number of agents, $F = T - P$ the prediction horizon (future steps $t = P+1, \dots, T$ following $P$ observed steps), and $k$ the number of sampled joint predictions (scenarios) per instance. Given ground-truth agent trajectories $x_{i,t}$ and $k$ complete joint predictions $\hat x^j_{i,t}$, with $\hat x^j_{i,t}$ the predicted position of agent $i$ at time $t$ in scenario $j$, the joint metrics are defined as follows (Teoh, 23 Nov 2025):
$\minJADE_k = \min_{1\leq j \leq k} \left( \frac{1}{N F} \sum_{i=1}^N\sum_{t=P+1}^T \left\| \hat x^j_{i,t} - x_{i,t} \right\| \right)$
$\minJFDE_k = \min_{1\leq j \leq k} \left( \frac{1}{N} \sum_{i=1}^N \left\| \hat x^j_{i,T} - x_{i,T} \right\| \right)$
In contrast, per-agent metrics (minADE, minFDE) select each agent’s best trajectory among the samples independently prior to averaging, leading to potential incoherence in sampled scenarios. In the joint setting, the minimum is taken across full scenarios—each scenario being a tuple of trajectories, one for each agent. This design forces the metric to reward only those samples in which all agents simultaneously achieve low errors, thereby measuring genuine joint accuracy.
2. Comparison with Marginal Metrics and Rationale
The key distinction between joint and marginal (per-agent) metrics lies in the placement of the minimization operation. Marginal metrics such as minADE or minFDE
$\minADE_k = \frac{1}{N}\sum_{i=1}^N \min_{1 \leq j \leq k} \left( \frac{1}{F}\sum_{t=P+1}^T \| \hat x^j_{i,t} - x_{i,t}\| \right)$
allow each agent’s minimal-error prediction to originate from a different scenario $j$. When $k$ is large, a model can cover individual agent futures well without ever generating globally plausible multi-agent outcomes. Conversely, minJADE and minJFDE enforce that all agents’ errors be small within the same scenario, penalizing incoherence and rewarding genuine multi-agent coordination and compatibility (Teoh, 23 Nov 2025). In multi-agent systems such as team sports, this coherence is critical—model outputs must represent plausible and physically compatible group behavior rather than a collage of optimal single-agent outcomes.
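A small numerical sketch (toy values in NumPy; shapes and numbers are illustrative, not from the paper) makes the placement of the minimum concrete: each agent is covered perfectly by some scenario, so minADE is zero, yet no single scenario gets both agents right, so minJADE stays large.

```python
import numpy as np

# Toy instance: k=2 scenarios, N=2 agents, F=3 future steps, 2-D positions.
gt = np.zeros((2, 3, 2))                  # ground truth: both agents stay at the origin
pred = np.zeros((2, 2, 3, 2))             # predictions, shape (k, N, F, 2)
pred[0, 1, :, 0] += 5.0                   # scenario 0: agent 1 off by 5, agent 0 exact
pred[1, 0, :, 0] += 5.0                   # scenario 1: agent 0 off by 5, agent 1 exact

err = np.linalg.norm(pred - gt, axis=-1)  # per-step displacement errors, shape (k, N, F)

# Marginal: min over scenarios *per agent*, then average over agents.
min_ade = err.mean(axis=-1).min(axis=0).mean()
# Joint: average over agents and time *within* a scenario, then min over scenarios.
min_jade = err.mean(axis=(1, 2)).min()

print(min_ade)   # 0.0
print(min_jade)  # 2.5
```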
3. Implementation and Computational Details
The computation of minJADE and minJFDE for a given instance proceeds as follows (Teoh, 23 Nov 2025):
- For each scenario index $j$, compute its agent-averaged displacement errors:
- Compute $\| \hat x^j_{i,t} - x_{i,t} \|$ for all $i$ and $t$; average over all agents $i$ and future steps $t$ within scenario $j$ to obtain $E^j_{\text{JADE}}$.
- For final displacement, compute $\| \hat x^j_{i,T} - x_{i,T} \|$ for all $i$; average over $i$ within scenario $j$ to give $E^j_{\text{JFDE}}$.
- Take the minimum over all $k$ scenarios:
- $\minJADE_k = \min_{j=1..k} E^j_{\text{JADE}}$
- $\minJFDE_k = \min_{j=1..k} E^j_{\text{JFDE}}$
A central requirement is that the minimization is global over complete agent-tuples, not per-agent; no mixing of scenario indices across agents is permitted. This approach is both mathematically and algorithmically straightforward, requiring only an outer loop over $j$ and mean computations over agents and time steps for each scenario.
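The loop described above vectorizes directly; the following sketch assumes predictions stacked as an array of shape (k, N, F, d) with ground truth of shape (N, F, d), where d is the spatial dimension (array layout and names are illustrative, not taken from the paper).

```python
import numpy as np

def joint_metrics(pred, gt):
    """Return (minJADE_k, minJFDE_k) for one instance.

    pred: (k, N, F, d) array of k sampled joint scenarios.
    gt:   (N, F, d) array of ground-truth future trajectories.
    """
    # Per-step Euclidean displacement errors, shape (k, N, F).
    err = np.linalg.norm(pred - gt[None], axis=-1)

    # E^j_JADE: average over all agents and all future steps within scenario j.
    e_jade = err.mean(axis=(1, 2))          # shape (k,)

    # E^j_JFDE: average final-step error over agents within scenario j.
    e_jfde = err[:, :, -1].mean(axis=1)     # shape (k,)

    # Minimum over complete scenarios; scenario indices are never mixed across agents.
    return e_jade.min(), e_jfde.min()

# Usage with random placeholder data: k=6 scenarios, N=10 agents, F=12 steps, 2-D.
rng = np.random.default_rng(0)
print(joint_metrics(rng.normal(size=(6, 10, 12, 2)), rng.normal(size=(10, 12, 2))))
```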
4. Domains of Application
Multi-Agent Trajectory Forecasting
minJADE and minJFDE have been established as evaluation standards in multi-agent trajectory forecasting for domains with nontrivial group dynamics, such as team sports (NBA basketball, Football-U, Basketball-U datasets) (Teoh, 23 Nov 2025). In these contexts, the metrics directly test whether a model can generate at least one sample in which all agents' futures are compatible and collectively realistic. Empirical evaluation demonstrates that models designed explicitly for joint coherence, such as CausalTraj, achieve lower joint errors, indicating more plausible scenario generation, relative to models optimized only for marginal metrics.
Noncoherent Multiuser Communications
An analogous joint-metric structure arises in noncoherent multiuser MIMO multiple-access channels (MACs), where the design objective is to minimize the worst-case pairwise error probability (equivalently, to maximize its error exponent). The counterpart of minJADE is the minimum log-determinant distance between codeword pairs, maximized over all candidate codebooks (Ngo et al., 2020); this distance is expressed through the eigenvalues of a matrix formed from the pair of codewords. Its high-SNR simplification, the counterpart of minJFDE, maximizes the minimum trace distance instead. Both quantities are tied to the geometry of the manifold of positive-definite Hermitian matrices and quantify codeword distinguishability. Optimizing these joint distances guarantees that no two codeword tuples yielded by the codebook are close on the signal manifold, thereby ensuring robust joint decodability in the presence of noise and interference.
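The max-min structure can be illustrated with a short sketch that scores a candidate codebook by its worst-separated codeword pair; the log-determinant distance used here is a generic stand-in, not the exact metric from (Ngo et al., 2020), and the codebook sizes are arbitrary.

```python
import numpy as np
from itertools import combinations

def logdet_distance(X, Xp):
    """Generic stand-in distance: log det(I + (X - Xp)(X - Xp)^H).

    The log-determinant metric of (Ngo et al., 2020) has its own specific form;
    this placeholder only illustrates the pairwise-distance computation.
    """
    D = X - Xp
    return np.log(np.linalg.det(np.eye(D.shape[0]) + D @ D.conj().T).real)

def worst_pair_separation(codebook):
    """Joint figure of merit: the minimum distance over all codeword pairs."""
    return min(logdet_distance(X, Xp) for X, Xp in combinations(codebook, 2))

# Compare two random candidate codebooks of 8 complex T x M codewords and keep
# the one whose worst-separated pair is larger (the max-min criterion).
rng = np.random.default_rng(1)
make = lambda: [rng.normal(size=(4, 2)) + 1j * rng.normal(size=(4, 2)) for _ in range(8)]
cb_a, cb_b = make(), make()
best = max((cb_a, cb_b), key=worst_pair_separation)
```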
Dual Function Radar-Communication Systems
In integrated radar-communication systems, joint metrics formalize the probability that both radar and communication objectives are achieved simultaneously in the same system configuration (Moulin et al., 2022). For instance, the JRDCCP (Joint Radar Detection & Communication Coverage Probability) and JRSCCP (Joint Radar Success & Communication Coverage Probability) evaluate the fraction of system realizations where both signal-to-interference requirements are jointly met. These metrics are tightly related (up to reparametrization and structure) to minJADE/minJFDE as they quantify the probability that system demands across modalities are met simultaneously—not marginally.
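A brief Monte Carlo sketch (synthetic, correlated SIR samples with hypothetical thresholds, not a channel model from (Moulin et al., 2022)) shows how joint coverage differs from the two marginals: only realizations in which both requirements hold at once count toward the joint probability.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Synthetic SIR samples in dB for the radar and communication functions, sharing
# a common fading/interference component so the two are correlated.
shared = rng.normal(size=n)
sir_radar = 6.0 + 4.0 * shared + 3.0 * rng.normal(size=n)
sir_comm = 4.0 + 4.0 * shared + 3.0 * rng.normal(size=n)

thr_radar, thr_comm = 5.0, 3.0   # hypothetical detection / coverage thresholds (dB)

p_radar = np.mean(sir_radar >= thr_radar)                               # marginal radar coverage
p_comm = np.mean(sir_comm >= thr_comm)                                  # marginal communication coverage
p_joint = np.mean((sir_radar >= thr_radar) & (sir_comm >= thr_comm))    # joint coverage

print(p_radar, p_comm, p_joint)  # p_joint never exceeds either marginal
```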
5. Empirical Significance and Evaluation
Experiments on multi-agent sports datasets demonstrate that minJADE and minJFDE can highlight substantial differences in model coherence undetectable by per-agent metrics. For example, CausalTraj and its variants are able to achieve both strong marginal accuracy and state-of-the-art joint metric performance, whereas methods optimized solely for minADE or minFDE may yield low marginal errors but much higher joint errors (Teoh, 23 Nov 2025). Figure 1 and Table 1 in (Teoh, 23 Nov 2025) depict instances where only CausalTraj generates truly coordinated group behaviors such as passes or joint cuts within a single scenario, which are not realized by per-agent-wise optimal baselines.
In the context of joint constellation design, optimizing for minJADE or minJFDE guarantees that, regardless of channel realization or power allocation, the minimal codeword separation—and hence the worst-case pairwise error performance—remains maximized. Numerical optimization is typically accomplished via Riemannian gradient methods on products of Grassmann manifolds, employing smooth surrogates to handle the max-min structure.
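One standard way to smooth the max-min objective for gradient-based optimization is a soft-min (log-sum-exp) surrogate; the sketch below is a generic illustration with an arbitrary temperature, not a specific routine from the cited work.

```python
import numpy as np

def soft_min(distances, beta=50.0):
    """Smooth surrogate for min(distances): -(1/beta) * log(sum(exp(-beta * d))).

    As beta grows, the surrogate approaches the true minimum while staying
    differentiable, so it can be maximized with (Riemannian) gradient methods.
    beta is an illustrative temperature, not a value from the cited papers.
    """
    d = np.asarray(distances, dtype=float)
    m = d.min()                                   # shift for numerical stability
    return m - np.log(np.exp(-beta * (d - m)).sum()) / beta

# Maximizing soft_min over all pairwise codeword distances is a smooth stand-in
# for maximizing the worst-case (minimum) pairwise separation.
print(soft_min([1.2, 0.9, 1.5]))   # close to 0.9
```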
6. Limitations and Interpretability
Joint metrics such as minJADE and minJFDE are inherently “hard” metrics—success is measured by the best scenario sampled, not by probabilistic coverage or expected utility. While they directly encourage coherence, they may be trivially minimized by degenerate models that collapse predictions, e.g., a model outputting only a single deterministic mean trajectory per agent. To mitigate this, the “min” form is preferred over “mean” to encourage diversity over repeated sampling, but an overly concentrated model can still score well on these metrics in special cases. Moreover, these metrics do not directly test for “physical plausibility”—they only ensure scenario-level joint error minimization. For comprehensive model evaluation, joint accuracy should be considered alongside domain-specific constraints and qualitative assessment of generated samples.
7. Extensions and Connections
The core architecture of joint metrics—minimizing groupwise errors across full multi-agent samples rather than marginals—has analogues across multiple domains:
- In automotive integrated sensing-communication systems, joint SIR event probabilities play a similar role in certifying that multi-functional system constraints are simultaneously satisfied (Moulin et al., 2022).
- Within MAC codebook design, maximizing minimal pairwise Riemannian-geometric distances ensures that no two users' codewords are confusable, a direct analogue to requiring at least one “well-separated” joint scenario (Ngo et al., 2020).
- Additional extensions include user-wise decompositions, power-allocation optimization (which may reduce to solving cubic equations in two-user cases), and application of joint metrics to scenarios with partial interference cancellation or imperfect system modeling.
Across these contexts, joint metrics serve as principled tools for enforcing collective compatibility and robustness, addressing the common failure mode of marginally accurate but incoherent multi-entity predictions. Their adoption reflects a broader shift toward explicitly evaluating the joint structure and realism of complex multi-entity outputs.