Discriminant Gain in ISAC and Edge AI Systems
- Discriminant Gain is a metric that quantifies class separability using Gaussian formulations, offering a clear measure of inference accuracy and error bounds.
- As a design criterion it replaces traditional MSE objectives, yielding closed-form water-filling power allocations that concentrate resources on the most discriminative features.
- DG governs the trade-off between sensing and communication, enabling efficient system design and benchmarking in ISAC and ISEA applications.
Discriminant Gain (DG) quantifies class separability in feature space and has recently become a core metric for characterizing inference performance in task-oriented integrated sensing and communication (ISAC) and integrated sensing and edge AI (ISEA) systems. Unlike classical mean squared error (MSE)-based criteria, DG directly connects with detection-theoretic limits, admitting tractable expressions and closed-form optimization in Gaussian mixture settings. It governs the tradeoff between sensing and communication resources, optimally allocates power to maximize inference accuracy, and enables the principled design and benchmarking of ISAC/ISEA pipelines.
1. Formal Definition and Mathematical Foundations
DG is defined under the assumption that the feature vector of each class follows a (complex) Gaussian distribution with a shared covariance. The pairwise discriminant gain between classes $l$ and $l'$ is
$$G_{l,l'} = (\boldsymbol{\mu}_l - \boldsymbol{\mu}_{l'})^{\mathsf{H}}\, \mathbf{C}^{-1} (\boldsymbol{\mu}_l - \boldsymbol{\mu}_{l'}),$$
where $\boldsymbol{\mu}_l$ is the mean of class $l$ and $\mathbf{C}$ the feature covariance.
For multiclass problems, the minimum pairwise gain $G_{\min} = \min_{l \neq l'} G_{l,l'}$ governs the worst-case separability, serving as a lower-complexity surrogate for inference-oriented system design (Dong et al., 23 Oct 2025).
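The pairwise and minimum DG can be computed directly from class statistics. The sketch below is a minimal illustration in plain NumPy, assuming real-valued features (so the Hermitian transpose reduces to a transpose) and illustrative class means; the function names are not from the cited papers.

```python
import numpy as np

def pairwise_dg(mu_a, mu_b, cov):
    """Pairwise discriminant gain: squared Mahalanobis distance between class means."""
    d = mu_a - mu_b
    return float(d @ np.linalg.solve(cov, d))  # avoids forming cov^{-1} explicitly

def min_pairwise_dg(means, cov):
    """Minimum pairwise DG over all class pairs (worst-case separability)."""
    L = len(means)
    return min(pairwise_dg(means[i], means[j], cov)
               for i in range(L) for j in range(i + 1, L))

# Three 2-D classes with a shared identity covariance (illustrative values)
means = [np.array([0., 0.]), np.array([3., 0.]), np.array([0., 1.])]
cov = np.eye(2)
print(min_pairwise_dg(means, cov))  # worst pair is classes 0 and 2 -> 1.0
```

Here the weakly separated pair (classes 0 and 2) determines $G_{\min}$, which is exactly the quantity a worst-case design would optimize.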
Alternative formulations use the symmetric Kullback-Leibler (KL) divergence in multi-view edge AI settings. For views indexed by $k = 1, \dots, K$ and $\mathbf{U}_k$ the subspace projection at sensor $k$, the per-view gain is
$$G_k = (\boldsymbol{\mu}_l - \boldsymbol{\mu}_{l'})^{\mathsf{H}} \mathbf{U}_k^{\mathsf{H}} \big(\mathbf{U}_k \mathbf{C}\, \mathbf{U}_k^{\mathsf{H}}\big)^{-1} \mathbf{U}_k (\boldsymbol{\mu}_l - \boldsymbol{\mu}_{l'}),$$
with $\mathbf{C}$ the shared covariance; for equal-covariance Gaussians this Mahalanobis form coincides with the symmetric KL divergence between the projected class distributions (Chen et al., 2023).
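The projected form makes the subspace geometry concrete: a view whose projection is aligned with the mean difference preserves separability, while an orthogonal view contributes nothing. The following sketch, with illustrative real-valued projections of my own choosing, computes the per-view gain above.

```python
import numpy as np

def view_dg(U, mu_a, mu_b, cov):
    """DG observed through a sensor's subspace projection U (rows span the subspace).
    For equal-covariance Gaussians this equals the symmetric KL divergence of the
    two projected class distributions."""
    d = U @ (mu_a - mu_b)
    C_proj = U @ cov @ U.T           # covariance in the projected subspace
    return float(d @ np.linalg.solve(C_proj, d))

mu_a, mu_b = np.array([2., 0., 0.]), np.zeros(3)
cov = np.eye(3)
U_good = np.array([[1., 0., 0.]])    # aligned with the mean difference
U_bad = np.array([[0., 1., 0.]])     # orthogonal to it
print(view_dg(U_good, mu_a, mu_b, cov))  # 4.0: full separability retained
print(view_dg(U_bad, mu_a, mu_b, cov))   # 0.0: this view is uninformative
```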
2. Connection to Inference Error Bounds
DG provides a tight link to Bayesian inference errors. For two classes and one-dimensional features, the minimum error probability is
$$P_e = Q\!\left(\frac{\sqrt{G}}{2}\right),$$
where $Q(\cdot)$ is the Gaussian Q-function. The relation extends to vector features and multiclass cases through the minimum pairwise DG, which confines the inference error probability as
$$Q\!\left(\frac{\sqrt{G_{\min}}}{2}\right) \le P_e \le (L-1)\, Q\!\left(\frac{\sqrt{G_{\min}}}{2}\right)$$
for $L$ classes. Thus, increasing DG, and in particular $G_{\min}$, monotonically decreases both bounds on the inference error probability, establishing DG as a critical system-level surrogate (Dong et al., 23 Oct 2025).
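The two-class relation is easy to verify numerically. The sketch below, using illustrative parameters (unit-variance classes with means 0 and 2, so $G = 4$), compares the $Q(\sqrt{G}/2)$ expression against a Monte-Carlo estimate of the maximum-likelihood detector's error rate.

```python
import numpy as np
from math import erfc, sqrt

def qfunc(x):
    """Gaussian Q-function: Q(x) = P(N(0,1) > x)."""
    return 0.5 * erfc(x / sqrt(2.0))

# Two 1-D unit-variance Gaussian classes with means 0 and mu, so G = mu^2
mu = 2.0
G = mu ** 2
p_err_theory = qfunc(sqrt(G) / 2)   # Q(sqrt(G)/2) = Q(1)

# Monte-Carlo check of the ML detector (threshold at mu/2, equal priors)
rng = np.random.default_rng(0)
n = 200_000
x0 = rng.normal(0.0, 1.0, n)        # class-0 samples
x1 = rng.normal(mu, 1.0, n)         # class-1 samples
p_err_mc = 0.5 * (np.mean(x0 > mu / 2) + np.mean(x1 < mu / 2))
print(p_err_theory, p_err_mc)       # both close to Q(1) ~ 0.1587
```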
In multi-view settings, DG predicts the exponential rate at which the entropy (uncertainty) of the predicted class distribution decays with the number of aggregated views $K$. The global DG exponent controls how rapidly
$$\mathcal{H}(K) \le C_0\, e^{-K \bar{G}},$$
where $\mathcal{H}(K)$ is an entropy surrogate, $C_0$ a constant, and $\bar{G}$ the asymptotic average DG (Chen et al., 2023).
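The exponential-decay behavior can be observed in a toy simulation. The sketch below, under my own illustrative assumptions (binary classes, i.i.d. unit-variance views with per-view DG $\bar{G}=1$, fusion by averaging), estimates the average posterior entropy as views accumulate; it is a qualitative illustration, not the construction in the cited work.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 1.0, 1.0                  # per-view class means 0 vs mu, shared noise sigma
G_bar = (mu / sigma) ** 2             # per-view DG (here 1.0)

def avg_posterior_entropy(K, n=50_000):
    """Monte-Carlo average binary-posterior entropy after fusing K i.i.d. views
    (true class = 1, equal priors); the fused statistic is the per-view mean."""
    xbar = rng.normal(mu, sigma / np.sqrt(K), n)      # distribution of the K-view mean
    llr = K * mu / sigma**2 * (xbar - mu / 2)          # log-likelihood ratio
    p1 = np.clip(1.0 / (1.0 + np.exp(-llr)), 1e-12, 1 - 1e-12)
    return float(np.mean(-p1 * np.log2(p1) - (1 - p1) * np.log2(1 - p1)))

ents = {K: avg_posterior_entropy(K) for K in (1, 4, 16)}
print(ents)   # uncertainty shrinks roughly exponentially in K * G_bar
```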
3. Task-Oriented DG Maximization and System Models
DG-centric system optimization replaces MSE with the direct maximization of DG under power/resource constraints. In the compress-and-estimate ISAC link, each transformed feature $x_n$ is transmitted over a fading channel, and the effective per-subcarrier DG is
$$\hat{G}_n = \frac{G_n\, b_n^2 |h_n|^2 \sigma_n^2}{b_n^2 |h_n|^2 \sigma_n^2 + \sigma^2},$$
where $b_n$ is the transmission gain, $h_n$ the channel coefficient, $\sigma^2$ the communication noise power, $\sigma_n^2$ the feature variance, and $G_n$ the noiseless per-feature DG.
DG maximization under a total power budget $P$ becomes
$$\max_{\{p_n \ge 0\}} \; \sum_n \hat{G}_n \quad \text{s.t.} \quad \sum_n p_n \le P,$$
with $p_n = b_n^2$ the per-feature transmit power (Dong et al., 23 Oct 2025).
In edge AI, local and global DGs are constructed via symmetric KL divergence in the projected subspace, and DG's subspace geometry determines how well class means are separated across the pooled sensor network (Chen et al., 2023).
4. Closed-Form DG-Optimal Power Allocation and Water-Filling Structure
The DG-maximization problem is convex in the allocated per-feature powers and admits a closed-form, water-filling-type solution
$$p_n^{\star} = \frac{1}{\gamma_n}\left[\sqrt{\frac{G_n \gamma_n}{\lambda}} - 1\right]^{+},$$
with $\lambda$ a Lagrange multiplier for the power constraint, $[\cdot]^{+} = \max\{\cdot, 0\}$, and $\gamma_n \triangleq |h_n|^2 \sigma_n^2 / \sigma^2$ the effective channel-to-noise ratio of feature $n$.
Power is assigned only to features whose discrimination-to-noise product satisfies $G_n \gamma_n > \lambda$: weak subcarriers are turned off and resources concentrate on the most informative dimensions. This distinguishes DG water-filling from its MSE-based counterpart, which allocates power more uniformly, even to weakly discriminative features (Dong et al., 23 Oct 2025).
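A minimal sketch of this allocation, assuming the water-filling form above with the water level found by bisection (illustrative gains and unit channel-to-noise ratios; the function name is my own):

```python
import numpy as np

def dg_waterfilling(G, gamma, P, iters=100):
    """DG-optimal allocation p_n = (1/gamma_n) * [sqrt(G_n*gamma_n/lam) - 1]^+,
    with the water level lam found by bisection to meet the power budget P."""
    G, gamma = np.asarray(G, float), np.asarray(gamma, float)
    def alloc(lam):
        return np.maximum(np.sqrt(G * gamma / lam) - 1.0, 0.0) / gamma
    lo, hi = 1e-12, float((G * gamma).max())   # bracket for lam
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        if alloc(lam).sum() > P:
            lo = lam    # too much power spent: raise the water level
        else:
            hi = lam
    return alloc(0.5 * (lo + hi))

G = np.array([4.0, 1.0, 0.05])   # strongly, mildly, and weakly discriminative features
gamma = np.ones(3)               # unit channel-to-noise ratios
p = dg_waterfilling(G, gamma, P=2.0)
print(p)    # weak feature gets zero power; roughly [1.667, 0.333, 0.0]
```

Note how the feature with $G_n \gamma_n$ below the converged water level is switched off entirely rather than receiving a small residual share.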
5. Comparison with MSE-Optimal and Traditional Criteria
Under MSE-optimal design, the system solves
$$\min_{\{p_n \ge 0\}} \; \sum_n \mathbb{E}\big[(\hat{x}_n - x_n)^2\big] \quad \text{s.t.} \quad \sum_n p_n \le P,$$
which admits a similar water-filling solution but without the emphasis on discrimination power. The DG-optimal allocation introduces an extra factor $\sqrt{G_n}$ inside the water level, biasing power toward features with higher class separability. In the low-SNR regime, DG maximization achieves substantially better power efficiency by shutting off weak subcarriers. In the high-SNR regime, the distinction between DG- and MSE-optimal allocations vanishes, as all channels are used and the benefit per dB equalizes (Dong et al., 23 Oct 2025).
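The low-SNR contrast can be shown with a small experiment. The sketch below, a simplification under my own assumptions, runs the same water-filling routine twice: once weighted by illustrative per-feature gains $G_n$ (DG-style) and once with uniform weights (MSE-style), at a tight power budget.

```python
import numpy as np

def waterfill(w, gamma, P, iters=100):
    """Water-filling p_n = (1/gamma_n)[sqrt(w_n*gamma_n/lam) - 1]^+, with lam
    bisected to meet the budget P. DG-style: w_n = G_n; MSE-style drops that weight."""
    w, gamma = np.asarray(w, float), np.asarray(gamma, float)
    alloc = lambda lam: np.maximum(np.sqrt(w * gamma / lam) - 1.0, 0.0) / gamma
    lo, hi = 1e-12, float((w * gamma).max())
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        lo, hi = (lam, hi) if alloc(lam).sum() > P else (lo, lam)
    return alloc(0.5 * (lo + hi))

G = np.array([4.0, 0.1, 0.1, 0.1])   # one strong and three weak features
gamma = np.ones(4)
p_dg = waterfill(G, gamma, P=0.5)            # weighted by discriminant gain
p_mse = waterfill(np.ones(4), gamma, P=0.5)  # uniform weights (MSE-style)
print((p_dg > 1e-9).sum(), (p_mse > 1e-9).sum())  # DG activates 1 feature, MSE all 4
```

At this low budget the DG-style allocation concentrates all power on the strong feature, while the uniform-weight allocation spreads it across every subcarrier, including the weakly discriminative ones.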
6. Multi-View Aggregation, Channel Effects, and Discriminant Loss
In ISEA, the aggregated DG grows linearly with the number of views/sensors, and the geometry of the pooled subspace (as determined by the projections $\mathbf{U}_k$) controls overall class separability:
$$G_{\mathrm{glob}}(K) = \sum_{k=1}^{K} G_k \approx K \bar{G}.$$
Sensing uncertainty, measured by entropy surrogates, decays exponentially with the product of the global discriminant gain and the number of views, $\mathcal{H}(K) \le C_0\, e^{-K \bar{G}}$. When transmission occurs over noisy (e.g., AirComp) channels, channel-induced discriminant loss $\Delta G$ attenuates the DG, and the uncertainty scaling law becomes $\mathcal{H}(K) \le C_0\, e^{-K (\bar{G} - \Delta G)}$, where $\Delta G$ quantifies the effective reduction in discriminability caused by channel noise or distortion (Chen et al., 2023).
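One practical reading of the attenuated scaling law: channel-induced loss inflates the number of views needed to reach a target uncertainty. A back-of-the-envelope sketch with illustrative values ($\bar{G}=1$, $\Delta G = 0.4$, all names my own):

```python
import numpy as np

def views_needed(H0, target, G_eff):
    """Smallest K with H0 * exp(-K * G_eff) <= target, from the scaling law."""
    return int(np.ceil(np.log(H0 / target) / G_eff))

G_bar, dG = 1.0, 0.4    # asymptotic average DG and channel-induced loss
print(views_needed(1.0, 1e-3, G_bar))        # ideal channel: 7 views
print(views_needed(1.0, 1e-3, G_bar - dG))   # noisy channel: 12 views
```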
7. Operational Insights: Power-Efficient Inference and Resource Tradeoffs
DG-optimal designs yield several system-level advantages:
- Selective Feature Activation: Simulation and empirical studies confirm that for fixed inference accuracy, DG-based resource allocation requires substantially less power than MSE-optimal design by focusing on the most discriminative features or views (Dong et al., 23 Oct 2025).
- Sensing-Communication Tradeoff: Power savings achieved by DG-optimal communication can be redirected to improve sensing quality, leading to joint design strategies for radar and communication subsystems (Dong et al., 23 Oct 2025).
- Adaptive Aggregation and Access Mode: In multi-view settings, the exponential convergence rate of uncertainty is preserved until attenuated by channel effects. Adaptive switching between over-the-air computing and orthogonal access, based on the ratio of receive antennas to sensors, allows the system to maintain a high global DG and rapid uncertainty decay (Chen et al., 2023).
DG thus underpins the design of resource-constrained, inference-optimal ISAC and ISEA links, establishing a unified metric that generalizes across model classes, channel effects, and practical hardware constraints.
References: (Dong et al., 23 Oct 2025, Chen et al., 2023)