Feasibility-Based Ranking

Updated 24 January 2026
  • Feasibility-based ranking is a method that orders algorithms by verifying feasibility before comparing objective values and runtime, ensuring valid outcomes even with missing data.
  • It employs a bi-objective lexicographical scheme that resolves ties and integrates nonparametric statistical tests, such as Friedman’s test, for rigorous performance analysis.
  • Applications in online scheduling and speed-scaling demonstrate its effectiveness in identifying optimal trade-offs under deadline and energy constraints.

Feasibility-based ranking refers to a class of methodologies that rank algorithms or solutions in computational experiments with explicit regard to feasibility—whether an algorithm is able to produce a valid solution within operational constraints—and, in certain contexts, other performance criteria such as objective values and computational time. These schemes are prevalent where algorithmic outcomes can include infeasible results due to problem complexity or time limitations, and where standard statistical evaluation is challenged by missing data, non-normality, or heteroscedasticity. Recent advances formalize bi-objective lexicographical ranking rules to ensure robust comparison and subsequent statistical inference, while feasibility-based rankings also drive theoretical and empirical ordering in online scheduling and speed-scaling with deadline constraints.

1. Formal Definitions and Mathematical Framework

Feasibility-based ranking, as introduced by Carvalho in computational experimentation contexts, is constructed upon two central data matrices: $R = (r_{ij})$ representing returned objective values for algorithm $j$ on instance $i$, and $T = (t_{ij})$ denoting running times, with infeasible instances flagged by missing entries (NA). The ranking algorithm proceeds for each instance $i$ by sorting algorithms lexicographically on the vector $(-f_{ij}, r_{ij}, t_{ij})$, where $f_{ij}$ marks feasibility ($f_{ij} = 1$ if feasible, else $0$). The lexicographical comparator $\prec_i$ defines pairwise ordering:

$$(r_{ij}, t_{ij}) \prec_i (r_{ik}, t_{ik}) \iff \begin{cases} f_{ij} = 1,\ f_{ik} = 0, \\ \text{or } f_{ij} = f_{ik} = 1 \text{ and } r_{ij} < r_{ik}, \\ \text{or } f_{ij} = f_{ik} = 1,\ r_{ij} = r_{ik},\ t_{ij} < t_{ik} \end{cases}$$

Ties are resolved by averaging ranks for equivalent algorithms (identical objective and runtime among feasibles, or collectively among infeasibles). The bi-objective lexicographical ranking produces a rank matrix $A = (a_{ij})$, where each row corresponds to a benchmark instance and constitutes an ordinal ranking of algorithms (Carvalho, 2019).
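As a minimal illustration, the comparator above translates directly into code. The following Python sketch (function and parameter names are illustrative, not from the cited work) encodes the three cases:

```python
def precedes(fj, rj, tj, fk, rk, tk):
    """Pairwise comparator: does algorithm j strictly precede algorithm k?

    f* is the feasibility flag (1 feasible, 0 infeasible), r* the
    returned objective value, t* the running time. The three branches
    mirror the three cases of the lexicographic definition.
    """
    if fj == 1 and fk == 0:           # feasible beats infeasible
        return True
    if fj == fk == 1 and rj < rk:     # both feasible: better objective wins
        return True
    # both feasible, equal objective: faster run wins
    return fj == fk == 1 and rj == rk and tj < tk
```

Two infeasible runs satisfy none of the branches, which is consistent with the convention that all infeasible algorithms tie collectively.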

2. Ranking Algorithms under Feasibility Constraints in Speed Scaling

In algorithmic speed-scaling with deadline feasibility constraints, feasibility-based ranking is foundational to both comparative analysis and algorithmic design. Given a sequence of jobs with release times, deadlines, and work requirements, deadlines impose feasibility: for job $i$, the integrated computation assigned over $[r_i, d_i]$ must meet or exceed $w_i$. One key experimental and theoretical question is the ranking of online scheduling algorithms under energy minimization subject to feasibility.
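The deadline constraint reduces to a simple predicate once a schedule's per-job allocations have been integrated over each window. A sketch (hypothetical helper; names are illustrative):

```python
def deadline_feasible(jobs, alloc, eps=1e-9):
    """Check deadline feasibility of a speed-scaling schedule.

    jobs: list of (release, deadline, work) triples (r_i, d_i, w_i).
    alloc[i]: total computation the schedule assigns to job i within
    its window [r_i, d_i]. The schedule is feasible iff every job
    receives at least its required work w_i (up to tolerance eps).
    """
    return all(a + eps >= w for (r, d, w), a in zip(jobs, alloc))
```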

Four principal online algorithms—qOA, OA, AVR, and BKP—are differentiated by their policies for processor speed assignment under the earliest-deadline-first discipline. Their proven competitive-ratio upper bounds generate an established feasibility-based ordering (for the cube-root power law, $\alpha = 3$), yielding $\mathrm{qOA} < \mathrm{OA} < \mathrm{AVR} < \mathrm{BKP}$. Across extensive empirical tests (real web server traces, diverse deadlines, spikiness), the experimental ranking coincides with competitive analysis: qOA consistently outperforms OA, AVR, and BKP in both energy consumption and compliance with feasibility (Abousamra et al., 2013).
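Of the four policies, AVR has the simplest closed-form speed rule: each active job runs at its density $w_i/(d_i - r_i)$, so the processor speed at any moment is the sum of the densities of currently active jobs. A minimal sketch (function name illustrative):

```python
def avr_speed(jobs, t):
    """Processor speed of the AVR (Average Rate) policy at time t.

    jobs: list of (release, deadline, work) triples. Each job i
    contributes its density w_i / (d_i - r_i) while active, i.e. for
    r_i <= t < d_i; the total speed is the sum of active densities.
    """
    return sum(w / (d - r) for (r, d, w) in jobs if r <= t < d)
```

For example, with jobs [(0, 4, 8), (1, 3, 4)], the speed at t = 2 is 8/4 + 4/2 = 4; energy consumption is then the integral of speed raised to the power $\alpha$ over time.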

3. Statistical Methods and Use of Rankings

Once generated, feasibility-based ordinal rankings are immediately suitable for nonparametric statistical tests that require only ordinal data and tolerate missing entries. Specifically, the resulting rank matrix $A$ can be used directly for Friedman’s test in a blocked design: each benchmark instance forms a block, each algorithm is a treatment, and the observed responses are ranks. Post-hoc procedures such as the Nemenyi test employ the average rank per algorithm:

$$\bar{a}_j = \frac{1}{n} \sum_{i=1}^{n} a_{ij}$$

This methodology allows formal hypothesis testing of algorithmic equivalence or superiority in the presence of non-normal data distributions and infeasibility (Carvalho, 2019).
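For concreteness, the standard Friedman chi-square statistic can be computed directly from the rank matrix $A$. The sketch below omits the tie-correction term, and the function name is illustrative:

```python
def friedman_statistic(A):
    """Friedman chi-square statistic from an n x m rank matrix A.

    Each row holds the ranks 1..m (ties averaged) of the m algorithms
    on one benchmark instance (block). Under the null hypothesis of
    equivalent algorithms, the statistic is approximately chi-squared
    with m - 1 degrees of freedom.
    """
    n, m = len(A), len(A[0])
    col_sums = [sum(row[j] for row in A) for j in range(m)]
    return (12.0 / (n * m * (m + 1))) * sum(s * s for s in col_sums) \
        - 3.0 * n * (m + 1)
```

When all blocks rank the algorithms identically, the statistic is maximal; when the column rank sums are equal, it is zero.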

4. Implementation Procedures and Computational Complexity

The bi-objective lexicographical ranking scheme is computationally tractable. For each instance ($n$ in total), all $m$ algorithms must be sorted according to a three-key comparator; standard sorting algorithms yield complexity $O(m \log m)$ per instance, or $O(nm \log m)$ in total. Essential considerations include proper management of ties (by objective and time among feasibles, a global tie for infeasibles) and consistent handling of missing entries by feasibility flag rather than sentinel values (Carvalho, 2019).
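A self-contained Python sketch of the full procedure, assuming infeasibility is encoded as None entries in R (names are illustrative, not from the cited work):

```python
def rank_matrix(R, T):
    """Build the n x m rank matrix A from objectives R and times T.

    R[i][j] is the objective of algorithm j on instance i (None if
    infeasible); T[i][j] is the running time. Each row is ranked with
    one three-key sort, giving O(n * m * log m) overall. Feasible runs
    tie on equal (objective, time); all infeasible runs tie together.
    """
    A = []
    for r_row, t_row in zip(R, T):
        m = len(r_row)
        # Feasibility flag as first key; one shared key for infeasibles.
        keys = [(0, r_row[j], t_row[j]) if r_row[j] is not None
                else (1, 0.0, 0.0) for j in range(m)]
        order = sorted(range(m), key=lambda j: keys[j])
        row = [0.0] * m
        i = 0
        while i < m:
            k = i
            while k + 1 < m and keys[order[k + 1]] == keys[order[i]]:
                k += 1
            avg = (i + k + 2) / 2.0  # mean of 1-based positions i+1 .. k+1
            for q in range(i, k + 1):
                row[order[q]] = avg
            i = k + 1
        A.append(row)
    return A
```

On the three-algorithm example of Section 6, rank_matrix([[10, 10, None]], [[2, 2, None]]) yields [[1.5, 1.5, 3.0]].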

5. Comparative Analysis with Alternative Schemes

Feasibility-based ranking offers an alternative to several commonly used evaluation schemes:

  • PAR10: Penalized average runtime treats unsolved problems as costing $10\times$ the cutoff time; this introduces pronounced outliers and yields a single-metric outcome.
  • Expected Runtime (ERT): Combines successes and failures into a real-valued expected evaluation count without explicit regard to solution quality.
  • Multiple-criteria dominance tests: Aggregate performance over several measures using probabilistic dominance, offering more nuanced but statistically demanding frameworks.
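As a point of comparison, the PAR10 score from the first bullet reduces to a one-liner (a sketch with illustrative names):

```python
def par10(times, solved, cutoff):
    """PAR10: penalized average runtime with penalty factor 10.

    times[i] is the runtime of run i and solved[i] whether it finished
    within the cutoff; unsolved runs are charged 10 * cutoff, which is
    the source of the pronounced outliers noted above.
    """
    penalized = [t if ok else 10.0 * cutoff for t, ok in zip(times, solved)]
    return sum(penalized) / len(penalized)
```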

Compared to these, the bi-objective lexicographical ranking’s main strengths are its nonparametric character, its direct accommodation of infeasibility (missing data), and its immediate suitability for rank-based statistical tests. Its limitations include disregard for the magnitude of differences after ordering and restriction to two ranking criteria unless extended via multi-dimensional rules (Carvalho, 2019).

6. Illustrative Examples and Case Studies

A typical ranking scenario involving three algorithms (A, B, C) on a single instance with outcomes $(10, 2), (10, 2), \mathrm{NA}$, respectively, produces ranks $(1.5, 1.5, 3)$: A and B tie in both objective and time as feasibles, C is classified as infeasible and receives the maximal rank (Carvalho, 2019). In speed-scaling applications, extensive simulations confirm the feasibility-based ranking aligns rigorously with theoretical competitive analysis; for instance, qOA’s energy usage closely tracks the offline optimum, while OA, AVR, and BKP incur successively larger penalties under diverse workloads and power objectives (Abousamra et al., 2013).

7. Scope, Limitations, and Domain Insights

Feasibility-based ranking is robust across domains where infeasibility and missing data are prevalent, notably in computational heuristics for hard optimization problems and online scheduling. The main constraints are its insensitivity to the magnitude of performance difference once ordered and the reliance on only two objectives in standard form; more elaborate multi-dimensional dominance frameworks may be necessary for finer comparative assessment. Both theory and large-scale empirical benchmarking validate its utility and reliability in ranking, statistical inference, and algorithmic selection (Carvalho, 2019, Abousamra et al., 2013).
