
Explainable Throughput Decomposition (ETD)

Updated 25 September 2025
  • ETD is a framework that decomposes system throughput into interpretable contributions from individual events using game-theoretic Shapley values and statistical guarantees.
  • The approach leverages Monte Carlo estimation and convexity-based Jensen gap bounds to quantify performance deviations in computing and network environments.
  • ETD is applied in wireless networks, energy-efficient RAN, and time series forecasting, enabling transparent resource management and optimized system tuning.

Explainable Throughput Decomposition (ETD) refers to frameworks and methodologies that decompose throughput—a core performance metric in computing, wireless networks, and systems—into interpretable attributions to governing events, decisions, or parameters. Central themes include the application of Shapley value theory, convexity gap bounds, and Monte Carlo quantification to produce efficiency-preserving resource and performance analyses. ETD is foundational for rendering opaque system throughput metrics transparent, enabling rigorous attribution of throughput changes to constituent components or decision agents.

1. Formal Foundations of Explainable Throughput Decomposition

ETD operates on the principle that CPI (cycles per instruction) is the sum of a baseline system cost and additive penalties from individual events, with throughput defined as the reciprocal of CPI. The throughput model is given as:

$$\text{CPI} = B + \sum_{i=1}^k Z_i, \qquad \text{Throughput (TP)} = \frac{1}{B + \sum_{i=1}^k Z_i}$$

where $B > 0$ represents the baseline cost and $Z_i \geq 0$ quantifies the penalty for event $E_i$ (Alpay et al., 23 Sep 2025).

ETD applies cooperative game theory by defining, for any subset of events $S \subseteq \{1, \ldots, k\}$, the value function

$$v(S) = E\left[ \frac{1}{B + \sum_{i \in S} Z_i} \right],$$

so that $v(\varnothing) = 1/B$ denotes baseline throughput. Shapley value theory then specifies the unique efficiency-preserving decomposition: for each event $i$,

$$\text{ETD}_i = \varphi_i(v)$$

ETD yields decompositions such that the total attribution matches the performance gap, enforcing the efficiency axiom:

$$\sum_{i} \text{ETD}_i = v(\{1, \ldots, k\}) - v(\varnothing)$$

This guarantees that all variations in observed throughput attributable to events are exactly distributed among those events (Alpay et al., 23 Sep 2025).
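
The efficiency axiom can be verified directly for small event sets by enumerating all coalitions. The sketch below uses illustrative values for $B$ and $Z_i$ (not taken from the paper) and treats the penalties as fixed, so $v(S)$ reduces to a deterministic reciprocal:

```python
import math
from itertools import combinations

# Illustrative sketch (values assumed, not from the paper): exact Shapley
# attribution of the throughput gap for a small event set, using the
# value function v(S) = 1 / (B + sum of penalties of events in S).
B = 1.0                      # baseline CPI cost
Z = [0.4, 0.25, 0.1]         # per-event penalties Z_i (fixed for simplicity)
k = len(Z)

def v(S):
    """Throughput when only the events in S incur their penalties."""
    return 1.0 / (B + sum(Z[i] for i in S))

def shapley(i):
    """Exact Shapley value phi_i(v) by enumerating all coalitions without i."""
    others = [j for j in range(k) if j != i]
    total = 0.0
    for r in range(len(others) + 1):
        for S in combinations(others, r):
            weight = math.factorial(len(S)) * math.factorial(k - len(S) - 1) / math.factorial(k)
            total += weight * (v(S + (i,)) - v(S))
    return total

etd = [shapley(i) for i in range(k)]
print("Per-event attributions:", etd)

# Efficiency axiom: attributions sum exactly to v(full set) - v(empty set).
assert abs(sum(etd) - (v(range(k)) - v(()))) < 1e-12
```

Because every penalty reduces throughput, each attribution is negative, and their sum reproduces the full throughput gap exactly.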

2. Shapley Value Computation and Statistical Guarantees

The Shapley value for an event $i$ is defined as

$$\varphi_i(v) = \sum_{S \subseteq F \setminus \{i\}} \frac{|S|!\,\left(|F| - |S| - 1\right)!}{|F|!} \left[ v(S \cup \{i\}) - v(S) \right]$$

where $F$ is the full set of features/events. For complex systems, direct computation is infeasible as the event count grows, necessitating Monte Carlo estimation over random permutations (Alpay et al., 23 Sep 2025). Given interval width $B_i$ for the marginal contributions, Hoeffding's inequality supplies the non-asymptotic sample complexity guarantee:

$$M \geq \frac{B_i^2}{2 \varepsilon^2} \log\left(\frac{2}{\delta}\right)$$

ensuring that with probability at least $1-\delta$ the ETD estimate deviates by no more than $\varepsilon$ from the true value. This places ETD on rigorous statistical footing, making error quantification intrinsic to throughput attribution workflows.
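
A minimal sketch of the permutation-sampling estimator, with the sample count $M$ taken from the Hoeffding bound above; the penalty values and the choice of interval width $B_i = 1/B$ are illustrative assumptions, not the paper's configuration:

```python
import math
import random

# Illustrative sketch (penalties and interval width assumed): permutation-sampling
# Monte Carlo estimate of one event's attribution, with the sample count M chosen
# from the Hoeffding bound M >= B_i^2 / (2 * eps^2) * log(2 / delta).
B = 1.0
Z = [0.4, 0.25, 0.1, 0.3, 0.15]     # per-event penalties (assumed)
k = len(Z)

def v(S):
    """Value function: throughput with the penalties of events in S applied."""
    return 1.0 / (B + sum(Z[j] for j in S))

def mc_shapley(i, eps=0.01, delta=0.05):
    # Marginal contributions lie in an interval of width at most 1/B here,
    # because 0 <= v(S) <= 1/B when all penalties are nonnegative.
    width = 1.0 / B
    M = math.ceil(width ** 2 / (2 * eps ** 2) * math.log(2 / delta))
    events = list(range(k))
    total = 0.0
    for _ in range(M):
        random.shuffle(events)
        pos = events.index(i)
        predecessors = events[:pos]                       # events preceding i in the permutation
        total += v(predecessors + [i]) - v(predecessors)  # marginal contribution of event i
    return total / M   # within eps of the true phi_i with probability >= 1 - delta

print(mc_shapley(0))
```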

3. Convexity and Jensen Gap Bounds

Because throughput is the reciprocal of an affine sum of penalties, its convexity can be exploited for error analysis. The paper establishes Jensen gap bounds:

$$\frac{1}{B + E\left[\sum Z_i\right]} \leq E\left[ \frac{1}{B + \sum Z_i} \right] \leq \frac{1}{B + E\left[\sum Z_i\right]} + \frac{\mathrm{Var}\left(\sum Z_i\right)}{B^3}$$

with the tighter two-sided bounds (where $M$ denotes an upper bound on the total penalty $\sum Z_i$):

$$\frac{1}{2}\,\frac{\mathrm{Var}\left(\sum Z_i\right)}{(B + M)^3} \leq E\left[\frac{1}{B + \sum Z_i}\right] - \frac{1}{B + E\left[\sum Z_i\right]} \leq \frac{1}{2}\,\frac{\mathrm{Var}\left(\sum Z_i\right)}{B^3}$$

These bounds are critical for understanding how the expectation of throughput deviates from its mean-field approximation under system noise and event variability (Alpay et al., 23 Sep 2025).
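
A quick numerical sanity check of the two-sided bound is straightforward with Monte Carlo sampling; the penalty distributions below are arbitrary stand-ins, and $M$ is taken as the almost-sure upper bound on the penalty sum:

```python
import numpy as np

# Illustrative numerical check of the two-sided Jensen gap bound (distributions
# are arbitrary stand-ins, not from the paper). Each penalty is uniform on [0, 0.3],
# so the penalty sum is bounded by M = 4 * 0.3.
rng = np.random.default_rng(0)
B = 1.0
Zsum = rng.uniform(0.0, 0.3, size=(200_000, 4)).sum(axis=1)   # samples of sum_i Z_i
M = 4 * 0.3                                                   # a.s. upper bound on the sum

gap = np.mean(1.0 / (B + Zsum)) - 1.0 / (B + Zsum.mean())     # empirical Jensen gap
lower = 0.5 * Zsum.var() / (B + M) ** 3
upper = 0.5 * Zsum.var() / B ** 3
print(f"lower={lower:.6f}  gap={gap:.6f}  upper={upper:.6f}")
assert lower <= gap <= upper
```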

4. ETD in Wireless Networks and Energy-Efficient RAN

In wireless systems and Open Radio Access Networks (Open RAN), ETD techniques are used to attribute throughput changes to system parameters such as airtime, goodput, buffer status report, and resource blocks (Malakalapalli et al., 25 Apr 2025). Explainable AI techniques—SHAP and LIME—decompose model predictions (such as power consumption) into contributions from throughput and related metrics:

  • LIME approximates $f(x)$ locally with an interpretable model $g(x)$, optimized via

$$\min_{g \in G} \; L(f, g, \pi_x) + \Omega(g)$$

  • SHAP applies the Shapley value formula as described above.

The analysis enables the diagnosis of how increases in throughput, airtime, or buffer occupancy drive energy consumption, revealing detailed interrelationships that can guide targeted protocol optimizations for energy efficiency. A plausible implication is that this enables fine-grained adaptation of RAN parameters in response to throughput-induced energy dynamics.
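
As a sketch of this workflow, the snippet below fits a surrogate power-consumption regressor on synthetic data with hypothetical RAN feature names and attributes its predictions with SHAP; the feature names, data, and model are assumptions for illustration, not the published experimental setup:

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

# Illustrative sketch: synthetic data and hypothetical feature names stand in for
# the RAN metrics discussed above.
rng = np.random.default_rng(0)
n = 1000
X = pd.DataFrame({
    "throughput_mbps": rng.uniform(10, 500, n),
    "airtime_pct": rng.uniform(5, 95, n),
    "buffer_status": rng.uniform(0, 1, n),
    "resource_blocks": rng.integers(1, 100, n),
})
# Synthetic power-consumption target: grows with throughput and airtime (illustrative only).
y = 0.02 * X["throughput_mbps"] + 0.1 * X["airtime_pct"] + rng.normal(0, 1, n)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)            # exact Shapley values for tree ensembles
shap_values = explainer.shap_values(X.iloc[:100])
# Mean |SHAP| per feature: its average contribution magnitude to predicted power.
print(dict(zip(X.columns, np.abs(shap_values).mean(axis=0))))
```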

5. Multi-Agent Network Slicing and Prioritized Value Decomposition

In network slicing for 5G and beyond, ETD frameworks are instantiated via multi-agent Q-learning and prioritized value decomposition (Salehi et al., 27 Jan 2025). The Prioritized Value-Decomposition Network (PVDN) approximates the joint action-value as

$$Q\left(\{h^i\}, \{a^i\}\right) \approx \sum_{i \in \{\text{MSMA},\, \text{USMA}\}} Q^i\left(h^i, a^i\right)$$

where $h^i$ are the histories and $a^i$ the actions of slice agents (e.g., eMBB and URLLC). PVDN introduces prioritization via

$$r = \omega_{\text{USMA}} \left( r_{\text{USMA}} - \beta \, \Delta B^{\text{eMBB,avg}}_{\text{MSMA}} \right) + \omega_{\text{MSMA}} \left( r_{\text{MSMA}} - (1-\beta) \, \Delta D^{\text{URLLC,avg}}_{\text{USMA}} \right)$$

The adaptive trade-off parameter,

$$\beta = \frac{\left|\Delta D^{\text{URLLC}}\right|}{\left|\Delta D^{\text{URLLC}}\right| + \left|\Delta B^{\text{eMBB}}\right|}$$

balances latency reduction and throughput maximization. ETD here supports isolation and interpretability of slice-specific decisions: resource-allocation consequences (throughput improvements of up to 67% and latency reductions of up to 35% relative to baselines) can be traced directly to the contributions of individual slice-management agents.
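
A compact sketch of the additive value decomposition and prioritized reward shaping is shown below; the observation/action dimensions, network sizes, and variable names are assumptions, and the paper's actual architectures and hyperparameters may differ:

```python
import torch
import torch.nn as nn

# Illustrative sketch (dimensions, network sizes, and agent names are assumptions):
# additive decomposition of the joint Q-value into per-slice-agent Q-values, plus
# the prioritized reward shaping with the adaptive trade-off beta from above.
class SliceAgentQ(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))

    def forward(self, history):
        return self.net(history)                      # per-action Q-values for this agent

agents = {"MSMA": SliceAgentQ(obs_dim=16, n_actions=8),
          "USMA": SliceAgentQ(obs_dim=16, n_actions=8)}

def joint_q(histories, actions):
    """Q({h^i},{a^i}) ~= sum_i Q^i(h^i, a^i): each agent's chosen-action value, summed."""
    return sum(agents[name](histories[name])
               .gather(1, actions[name].unsqueeze(1)).squeeze(1)
               for name in agents)

def prioritized_reward(r_usma, r_msma, delta_d_urllc, delta_b_embb,
                       w_usma=0.5, w_msma=0.5):
    """Reward shaping with adaptive beta balancing latency and throughput objectives."""
    beta = abs(delta_d_urllc) / (abs(delta_d_urllc) + abs(delta_b_embb) + 1e-9)
    return (w_usma * (r_usma - beta * delta_b_embb)
            + w_msma * (r_msma - (1 - beta) * delta_d_urllc))
```

The additive structure is what makes the decomposition explainable: each agent's Q-value, and hence its share of the joint value, can be inspected separately.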

6. Practical Applications and Interpretability in Time Series

Explainable throughput decomposition is closely related to time series forecasting models with intrinsic interpretability properties. The DeLELSTM model (Wang et al., 2023) decomposes LSTM hidden states into instantaneous ($\beta_t$) and long-term ($\alpha_t$) effects

$$H_t \approx \sum_{i=1}^{D} \left[ \alpha_t^i \, h_{t-1}^i + \beta_t^i \left( h_t^i - h_{t-1}^i \right) \right]$$

producing interpretable attributions for each input variable over time. Quantities such as instantaneous importance

$$\text{In}_t^d = \frac{\left|\beta_t^d\right|}{\left|\alpha_t^d\right| + \left|\beta_t^d\right|}$$

allow users to distinguish whether rapid prediction changes arise from new input arrivals or are driven by longer-term trends—a distinction essential in finance, healthcare, and energy forecasting.
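
Given the $\alpha_t$ and $\beta_t$ weights produced by a trained model, the importance ratio is a simple elementwise computation; the tensors below are random placeholders standing in for real decomposition weights:

```python
import numpy as np

# Illustrative sketch: random tensors stand in for the alpha/beta decomposition
# weights a trained DeLELSTM would produce for each variable at each time step.
rng = np.random.default_rng(0)
T, D = 50, 4                          # time steps, input variables
alpha = rng.normal(size=(T, D))       # long-term (carried-over) effect weights
beta = rng.normal(size=(T, D))        # instantaneous (new-input) effect weights

instantaneous_importance = np.abs(beta) / (np.abs(alpha) + np.abs(beta) + 1e-12)
# Values near 1: the change at time t is driven by the newly arrived input of
# variable d; values near 0: the prediction follows the longer-term trend.
print(instantaneous_importance.mean(axis=0))   # average importance per variable
```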

7. Summary and Implications

Explainable Throughput Decomposition unifies game-theoretic Shapley attribution, convex analysis, and decomposed machine learning frameworks to rigorously attribute throughput in computing and networked systems. By providing efficiency-preserving, interpretable, and statistically guaranteed decomposition, ETD is foundational for transparent resource management, performance diagnosis, and adaptive system optimization in domains spanning computer architecture, wireless networks, and time series analysis.

| ETD Application Domain | Methodological Pillars | Performance/Interpretability Gains |
| --- | --- | --- |
| Computer systems (CPI, TP) | Shapley value, Jensen gap bounds | Rigorous attribution, error bounds |
| Open RAN, wireless networks | SHAP, LIME, multi-agent PVDN | Transparent resource allocation, energy efficiency |
| Time series forecasting | DeLELSTM linear decomposition | Separation of instantaneous vs. long-term effects |

These developments combine theoretical robustness with operational transparency, supporting both the diagnosis of performance bottlenecks and the design of interpretable control policies. The efficiency-preserving property, statistical guarantees, and multi-agent decompositions make ETD a core technique for transparent, explainable, and reliable system operation.
