
Void Minimization Scheduling in PONs

Updated 7 February 2026
  • Void minimization scheduling in PONs is defined as the consolidation of upstream grant transmissions to reduce idle intervals and enhance energy efficiency while meeting QoS requirements.
  • Techniques such as the EO-NoVM protocol employ real-time void detection and grant rescheduling to optimize OLT receiver sleep cycles and minimize packet delays.
  • Practical deployment guidelines focus on proper ONU report placement, wavelength dimensioning, and managing dynamic network conditions for effective energy and delay performance.

Void minimization scheduling in Passive Optical Networks (PONs) encompasses a set of techniques and algorithms designed to reduce or consolidate the idle time—termed "voids"—between scheduled upstream transmissions from distributed Optical Network Units (ONUs) to the central Optical Line Terminal (OLT). These methods are especially critical in Time and Wavelength Division Multiplexed PONs (TWDM-PONs), where energy efficiency and packet delay are closely tied to the scheduling of upstream grants and the distribution of voids. Current approaches have shifted from minimizing the number of active wavelengths to directly minimizing the number and duration of voids, significantly improving energy savings and latency performance under stringent Quality of Service (QoS) constraints.

1. Mathematical Foundation of Void-Minimization in PONs

Let $N$ be the number of ONUs, $W$ the number of wavelengths, $r_a^m$ the per-ONU maximum average arrival rate, $L \in [0,1]$ the offered load, and $r_d$ the per-wavelength service rate. The utilization factor is defined by

$$\rho = \frac{N r_a^m L}{W r_d}$$

Over an observation interval $T_{\text{obs}} \rightarrow \infty$, the aggregate OLT receiver idle time is given by

$$T_v^{\text{agg}} = (1-\rho)\, T_{\text{obs}} - M \left(N_R T + T_g\right)$$

where $M$ is the total number of REPORT-guard pairs, $N_R$ the REPORT size in bytes, $T$ the byte duration, and $T_g$ the guard interval. Under typical MPCP/Gated DBA frameworks, each grant can produce at most one void, so the void count $V$ across all wavelengths is

$$V = \sum_{j=1}^{W} (m_j - 1) = M - W$$

with $m_j$ denoting the number of grants on wavelength $j$. The chief scheduling objective is thus to minimize $V$, consolidating grants to reduce the number and fragmentation of idle periods (Dutta et al., 2017).
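As a numerical check on these quantities, a minimal Python sketch (parameter values are illustrative assumptions, not figures from the cited paper):

```python
# Sketch of the Section 1 quantities; all parameter values below are
# hypothetical, chosen only to exercise the formulas.

def utilization(N, r_a_m, L, W, r_d):
    """rho = N * r_a^m * L / (W * r_d)."""
    return (N * r_a_m * L) / (W * r_d)

def aggregate_void_time(rho, T_obs, M, N_R, T, T_g):
    """T_v^agg = (1 - rho) * T_obs - M * (N_R * T + T_g)."""
    return (1 - rho) * T_obs - M * (N_R * T + T_g)

def void_count(grants_per_wavelength):
    """V = sum_j (m_j - 1) = M - W: at most one void per grant."""
    return sum(m - 1 for m in grants_per_wavelength)

# 64 ONUs, 4 wavelengths, rates in bits/s (illustrative).
rho = utilization(N=64, r_a_m=0.15e9, L=0.8, W=4, r_d=2.5e9)
print(round(rho, 3))                 # 0.768
print(void_count([10, 12, 9, 11]))   # M=42 grants on W=4 wavelengths -> V=38
```

With 42 grants spread over 4 wavelengths, at most 38 voids can arise, which is what consolidation tries to drive down.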

2. Scheduling Goals and Delay Constraints

Void minimization in energy-efficient OLT design for TWDM-PONs pursues two intertwined goals: maximizing OLT-receiver sleep time and ensuring per-ONU (or per-flow) QoS-compliant delay bounds. The former is achieved both by retaining all $W$ wavelengths (maintaining low $\rho$ and thus more total idle time) and by consolidating grants so that each void is as long as possible, enabling efficient device sleep cycles. Delay is controlled by imposing a per-ONU delay bound $D_{\max}^k$, with consecutive worst-case inter-REPORT intervals $D_{q-1}^k$ and $D_q^k$ satisfying

$$D_{\max}^k \geq D_{q-1}^k + D_q^k + \frac{T_{\text{rtt}}^k}{2}$$

A fixed per-REPORT bound $D_q^k = D_{\text{const}}^k = \frac{D_{\max}^k - T_{\text{rtt}}^k/2}{2}$ is adopted for scheduling. Each upstream grant is timed so that data arrives at or before $t_R^{k,q} + D_q^k$ (Dutta et al., 2017).
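The fixed per-REPORT budget can be sketched in a few lines; the numeric values below are illustrative assumptions:

```python
# Sketch of the per-REPORT delay budget from Section 2.
# Times are in seconds; the 2 ms / 0.2 ms values are hypothetical.

def d_const(d_max, t_rtt):
    """D_const^k = (D_max^k - T_rtt^k / 2) / 2.

    Halving the residual budget guarantees
    D_max^k >= D_{q-1}^k + D_q^k + T_rtt^k / 2
    whenever both consecutive intervals stay at or below D_const^k.
    """
    return (d_max - t_rtt / 2) / 2

def latest_arrival(t_report, d_max, t_rtt):
    """Latest time granted data may arrive at the OLT: t_R^{k,q} + D_q^k."""
    return t_report + d_const(d_max, t_rtt)

print(d_const(d_max=2e-3, t_rtt=0.2e-3))   # ~0.00095 s, i.e. 0.95 ms per REPORT
```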

3. Algorithmic Frameworks and Complexity

The EO-NoVM protocol exemplifies online scheduling for void minimization. Upon receiving a REPORT at time $t_R^{k,q}$:

  1. Calculate earliest-arrival candidate: For each candidate wavelength $j$, determine $TC_{\min}^{k,j}$, factoring in gate processing, grant transmission, ONU wavelength-tuning time $T_t^{w_c^k, j}$, and RTT.
  2. Grant window computation: $T_w^k = G_k T + N_R T + T_g$
  3. Void identification: Identify valid voids $V_{\text{valid}}$ of sufficient length for grant insertion.
  4. Post-horizon scheduling: For cases lacking suitable voids, establish valid post-horizon time slots ($LF_{\text{valid}}$) on each wavelength.
  5. Priority scheduling: Prefer filling existing voids. If unavailable, allocate to the latest possible slot within valid horizons.
  6. Delay-violation fallback: Revert to earliest-finish scheduling if neither voids nor horizons satisfy constraints.

The void and horizon lists are maintained per wavelength and can be searched or updated in $O(N + \log W)$ time per REPORT, with $O(N + W)$ space requirements, allowing online, scalable operation (Dutta et al., 2017).
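The grant-placement priority (steps 3-6 above) can be sketched as follows; the data layout and function names are simplifications assumed here, not the paper's exact structures:

```python
# Minimal sketch of void-first grant placement with a horizon fallback.
# voids:    list of (wavelength, start, end) idle intervals
# horizons: list of (wavelength, horizon_end) last-finish times per wavelength

def place_grant(voids, horizons, t_earliest, t_deadline, t_w):
    """Return (mode, wavelength, start) for a grant window of length t_w."""
    # Steps 3/5: fill the earliest valid void that fits before the deadline.
    for j, start, end in sorted(voids, key=lambda v: v[1]):
        s = max(start, t_earliest)
        if s + t_w <= end and s + t_w <= t_deadline:
            return ("void", j, s)
    # Steps 4/5: otherwise take the latest feasible post-horizon slot.
    feasible = [(j, max(h, t_earliest)) for j, h in horizons
                if max(h, t_earliest) + t_w <= t_deadline]
    if feasible:
        j, s = max(feasible, key=lambda x: x[1])
        return ("horizon", j, s)
    # Step 6: delay-violation fallback -- earliest finish across wavelengths.
    j, s = min(((j, max(h, t_earliest)) for j, h in horizons),
               key=lambda x: x[1])
    return ("fallback", j, s)

# A grant of 5 time units, deadline 100: the void on wavelength 0 fits.
print(place_grant([(0, 10, 20), (1, 30, 33)], [(0, 50), (1, 40)],
                  t_earliest=0, t_deadline=100, t_w=5))   # ('void', 0, 10)
```

Preferring existing voids shrinks $V$; the latest-feasible horizon slot leaves room for future grants to consolidate behind it.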

4. Energy Efficiency: Upper Bounds, Simulation Results, and Mechanisms

In the ideal (fully consolidated) case with a single void, the OLT receiver activity fraction is $\rho = (N r_a^m L)/(W r_d)$, so the maximum achievable energy efficiency is

$$E_{\max} = \left[1 - \frac{N r_a^m L}{W r_d}\right] \cdot 100\%$$

EO-NoVM achieves energy efficiency within 2% of this bound for $N = 64$ ONUs, compared to substantial shortfalls by wavelength-minimization schemes. At typical high loads ($L \in [0.8, 0.9]$), EO-NoVM improves receiver energy efficiency by approximately 25%, directly attributable to the reduced number of voids (hence reduced sleep-to-wake overhead per scheduling cycle) (Dutta et al., 2017).

The energy benefit of void minimization arises because only idle periods exceeding the sleep-wakeup overhead $T_{\text{sw}}$ allow the OLT receiver to enter low-power mode. Each void used for sleep triggers one sleep-wake cycle, so the total transition overhead across all voids is $T_{\text{sw}} V$; consolidating idle periods reduces $V$ and thus this loss.
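A small sketch makes the consolidation effect concrete; the 2 ms overhead and void lengths below are illustrative assumptions:

```python
# Sketch: only voids longer than the sleep-wake overhead T_sw yield usable
# low-power time, and each such void costs one T_sw transition.

def usable_sleep(voids, t_sw):
    """Total low-power time: sum of (length - T_sw) over voids
    long enough to amortize the transition overhead."""
    return sum(v - t_sw for v in voids if v > t_sw)

t_sw = 2.0                    # ms, hypothetical transition overhead
fragmented   = [1.5] * 10     # 15 ms of idle split into 10 short voids
consolidated = [15.0]         # the same 15 ms of idle in one void

print(usable_sleep(fragmented, t_sw))    # 0: no void exceeds T_sw
print(usable_sleep(consolidated, t_sw))  # 13.0: only one T_sw penalty paid
```

The same total idle time yields zero sleep when fragmented but 13 ms when consolidated, which is exactly the mechanism behind the reported efficiency gains.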

5. Impact of Report Message Position and DBA Regimes on Void Minimization

The positioning of REPORT messages within each ONU upstream transmission is a key factor in void minimization for legacy EPON and GPON. Placing the REPORT at the beginning instead of the end of a burst reduces the unmasked propagation delay (idle time) by up to the ONU's payload transmission time. Under offline, single-thread poll-then-grant DBAs, this can yield up to 10–20 ms delay reduction at high load for "gated" grant sizing. Online and interleaved DBA mechanisms, by contrast, are inherently low-void and see negligible difference from report placement due to their immediate grant processing and void-masking characteristics (Mercian et al., 2013).

$$\Delta V = V_{\text{end}} - V_{\text{begin}} = (\text{payload time}) - t_R$$

Empirical results show that in single-thread offline polling, $V_{\text{end}} \approx 2\tau/O$ (e.g., $31\,\mu\text{s}$ for $2\tau = 1\,\text{ms}$, $O = 32$), while report-at-beginning grants yield $V_{\text{begin}} \approx (2\tau - G_{\max})/O$, often reducing voids by several $\mu\text{s}$ per ONU and total packet delay by up to tens of ms at high load (Mercian et al., 2013).
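These per-ONU void estimates are easy to reproduce numerically; the $2\tau = 1$ ms and $O = 32$ values echo the example in the text, while the $G_{\max}$ value is an illustrative assumption:

```python
# Sketch of the report-placement void estimates for single-thread
# offline polling. Times in seconds; O is the number of ONUs.

def v_end(two_tau, O):
    """Per-ONU void with the REPORT at burst end: ~ 2*tau / O."""
    return two_tau / O

def v_begin(two_tau, g_max, O):
    """Per-ONU void with the REPORT at burst start: ~ (2*tau - G_max) / O."""
    return (two_tau - g_max) / O

two_tau, O = 1e-3, 32                    # 1 ms round trip, 32 ONUs
g_max = 0.3e-3                           # hypothetical max grant duration
print(v_end(two_tau, O) * 1e6)           # ~31.25 us, matching the text
saved = v_end(two_tau, O) - v_begin(two_tau, g_max, O)
print(saved * 1e6)                       # ~9.4 us of void masked per ONU
```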

6. Practical Deployment and Design Guidelines

Designing a PON to optimally exploit void-minimizing scheduling requires:

  • Dimensioning $W$ such that $N r_a^m L/(W r_d) \le 0.8$ under peak load, maximizing available idle time for consolidation.
  • Setting per-ONU $D_{\max}^k$ according to expected service constraints, deriving $D_{\text{const}}^k$ accordingly.
  • Calibrating the OLT receiver transition time $T_{\text{sw}}$ (hardware-specific).
  • Maintaining per-wavelength, per-ONU sorted horizon lists to enable fast scheduling decisions.
  • For offline DBAs, always configure reports at burst start ("begin") for maximum void reduction, except where traffic freshness requires end-of-burst reporting; advanced hybrid schemes have diminishing returns in practical settings.
  • For online/interleaved DBAs, void minimization via report placement has negligible effect, and focus should be on fast scheduling granularity.
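The wavelength-dimensioning guideline in the first bullet reduces to a one-line calculation; the parameter values are illustrative assumptions:

```python
import math

# Sketch: smallest W keeping peak utilization at or below rho_max,
# i.e. N * r_a^m * L_peak / (W * r_d) <= rho_max.

def min_wavelengths(N, r_a_m, L_peak, r_d, rho_max=0.8):
    """Smallest integer W satisfying the utilization guideline."""
    return math.ceil((N * r_a_m * L_peak) / (rho_max * r_d))

# 64 ONUs at 0.15 Gb/s peak average rate, 2.5 Gb/s per wavelength.
print(min_wavelengths(N=64, r_a_m=0.15e9, L_peak=0.9, r_d=2.5e9))  # 5
```

Note that a pure capacity check ($\rho \le 1$) would allow $W = 4$ here; the 0.8 headroom buys the idle time that void consolidation converts into sleep.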

Self-similar, high-burstiness traffic should be used in simulation to accurately characterize trade-offs in energy and delay domains. Combining OLT void-minimization with ONU-side doze/sleep protocols yields maximal end-to-end power savings (Dutta et al., 2017, Mercian et al., 2013).

7. Research Directions and Limitations

Current analyses assume static RTT, constant processing/guard times, and ignore ONU-side wakeup overhead for early/begin reporting. Advanced prediction of per-ONU traffic arrivals, integration with multi-class QoS, mixed-access PONs, and WDM/TDM hybrids remain open issues. Expanding void minimization frameworks to incorporate dynamic or per-ONU adaptive report timing in combination with predictive algorithms is a recognized direction for further improvement, along with a more granular study of energy-delay trade-offs in practical field deployments (Dutta et al., 2017, Mercian et al., 2013).
