Concave Scalability Functions
- Concave scalability functions are strictly increasing and concave real-valued functions that model diminishing returns in resource allocation, commonly used in parallel scheduling and network flows.
- They enable generalized water-filling methods and algorithms like SmartFill, offering efficient and tractable solutions for optimal resource allocation under diminishing returns.
- Applications include densest subgraph optimization and market equilibrium, where these functions guarantee polynomial-time solvability and robust performance improvements over traditional models.
A concave scalability function, typically called a concave speedup function or concave size function in the literature, is any real-valued function that is strictly increasing and strictly concave on its domain, capturing the principle of diminishing returns in resource allocation. Concave scalability functions arise centrally in parallel scheduling, network flow, combinatorial optimization, and subgraph density maximization, enabling polynomial-time tractable algorithms and robust approximation schemes in scenarios where traditional (e.g., linear or submodular) models fall short. This entry surveys the formal properties, algorithmic consequences, and key applications of such functions.
1. Formal Definition and Properties
A concave scalability (speedup) function is a mapping $s : (0, \bar{\theta}] \to \mathbb{R}_{>0}$, where $\bar{\theta}$ is an upper resource or load threshold. The function is required to satisfy:
- $s$ is strictly increasing: $s'(\theta) > 0$ for all $\theta \in (0, \bar{\theta})$
- $s$ is strictly concave and continuously differentiable: $s''(\theta) < 0$ for all $\theta \in (0, \bar{\theta})$, and the marginal speedup $s'(\theta)$ is strictly decreasing in $\theta$ (Li et al., 1 Sep 2025).
Several related domains use discrete analogs, such as a set-size scaling function $f : \mathbb{Z}_{\ge 0} \to \mathbb{R}_{\ge 0}$, which is monotonically non-decreasing and satisfies the discrete concavity condition $f(x+1) + f(x-1) \le 2f(x)$. This implies the average increment $f(x)/x$ is non-increasing in $x$ (Kawase et al., 2017).
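These discrete conditions are easy to verify numerically. A minimal sketch (the helper names and tolerances are ours, not from the cited work):

```python
import math

def is_discretely_concave(f, n):
    """Discrete concavity: f(x+1) + f(x-1) <= 2*f(x) for x = 1..n-1."""
    return all(f(x + 1) + f(x - 1) <= 2 * f(x) + 1e-12 for x in range(1, n))

def avg_increment_nonincreasing(f, n):
    """The implied property: f(x)/x is non-increasing for x = 1..n."""
    avgs = [f(x) / x for x in range(1, n + 1)]
    return all(a >= b - 1e-12 for a, b in zip(avgs, avgs[1:]))

# concave examples pass both checks; a convex function fails the first
assert is_discretely_concave(math.sqrt, 100)
assert avg_increment_nonincreasing(math.sqrt, 100)
assert is_discretely_concave(math.log1p, 100)
assert not is_discretely_concave(lambda x: x ** 2, 100)
```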
Canonical examples include:
- Power functions: $s(\theta) = \theta^p$ with $0 < p < 1$, or $f(x) = x^p$
- Logarithmic functions: $s(\theta) = \log(1+\theta)$ or $f(x) = \log(1+x)$
These functions model diminishing acceleration with additional allocated resources, larger set sizes, or throughput under network congestion.
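Diminishing marginal returns for these canonical forms can be observed directly via finite differences (a small illustration; the function and helper names are ours):

```python
def power_speedup(theta, p=0.5):
    """Concave power-law speedup s(theta) = theta**p with 0 < p < 1."""
    return theta ** p

def marginal(s, theta, h=1e-6):
    """Numerical derivative s'(theta) via a central difference."""
    return (s(theta + h) - s(theta - h)) / (2 * h)

# the marginal speedup strictly decreases as the allocation grows
gains = [marginal(power_speedup, t) for t in (1.0, 2.0, 4.0, 8.0)]
assert all(a > b for a, b in zip(gains, gains[1:]))
```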
2. Structural Rules and Generalized Water-Filling
A central structural result for optimization under concave scalability functions is the Consistent Derivative Ratio (CDR) rule:
$\frac{s'(\theta_i^*(t))}{s'(\theta_j^*(t))} = c_{i,j}\quad \text{(constant in } t\text{)}$
for all jobs $i, j$ with positive allocations at any time $t$ (Li et al., 1 Sep 2025). This rule sharply distinguishes the optimality structure for concave (as opposed to linear or convex) speedup functions in parallel resource scheduling.
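The CDR invariance can be checked numerically for a power-law speedup: with $s(\theta) = \theta^p$, splitting the resource proportionally to $c_i^{1/(p-1)}$ keeps every derivative ratio constant as the total resource varies over time (a sketch under that assumption; the constants `c` are hypothetical):

```python
def cdr_allocations(c, total, p=0.5):
    """Split `total` so that s'(theta_i) = c_i * nu for a common nu,
    assuming s(theta) = theta**p; then theta_i is proportional to
    c_i**(1/(p-1))."""
    shares = [ci ** (1.0 / (p - 1.0)) for ci in c]
    scale = total / sum(shares)
    return [scale * s for s in shares]

def sprime(theta, p=0.5):
    """Marginal speedup s'(theta) = p * theta**(p-1)."""
    return p * theta ** (p - 1.0)

# the ratio s'(theta_1)/s'(theta_2) = c_1/c_2 at every resource level
c = [1.0, 3.0]
ratios = [sprime(th[0]) / sprime(th[1])
          for th in (cdr_allocations(c, total) for total in (1.0, 5.0, 25.0))]
assert max(ratios) - min(ratios) < 1e-9
assert abs(ratios[0] - 1.0 / 3.0) < 1e-9
```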
Optimal resource allocation under this rule reduces to a generalized water-filling (GWF) problem. For a set of $n$ jobs or flows, the aim is to allocate a total resource $\Theta$ among the entities so that $\sum_i \theta_i = \Theta$ while $s'(\theta_i) = c_i \nu$ for a common water level $\nu$, consistent with the CDR rule.
The unique solution is obtained by inverting $s'$ to determine per-job allocations, with closed-form solutions available for "regular" forms (e.g., power laws), and numerical inversion (via binary search) for arbitrary concave $s$.
This GWF approach generalizes classical water-filling (as used in communications and network utility maximization) to handle arbitrary concave (but not necessarily affine or exponential) scaling (Li et al., 1 Sep 2025).
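The water level can be found by binary search, as in this minimal GWF sketch (we use $s(\theta) = \theta^p$ only so that $(s')^{-1}$ has a closed form; the bracketing bounds and iteration count are ours):

```python
def gwf_allocate(c, total, p=0.5, iters=100):
    """Generalized water-filling: find a water level nu such that the
    allocations theta_i = (s')^{-1}(c_i * nu) sum to `total`, where
    s(theta) = theta**p and s'(theta) = p * theta**(p-1)."""
    def inv_sprime(y):
        # invert s'(theta) = p * theta**(p-1)  =>  theta = (y/p)**(1/(p-1))
        return (y / p) ** (1.0 / (p - 1.0))

    lo, hi = 1e-12, 1e12
    for _ in range(iters):
        nu = (lo * hi) ** 0.5            # bisect in log space
        alloc = [inv_sprime(ci * nu) for ci in c]
        if sum(alloc) > total:
            lo = nu                      # over-allocated: raise the water level
        else:
            hi = nu
    return alloc
```

With equal constants the resource splits evenly; for instance `gwf_allocate([1.0, 1.0], 8.0)` returns allocations close to `[4.0, 4.0]`.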
3. Algorithmic Schemes for Concave Scalability
For parallel scheduling with concave speedup functions, the SmartFill algorithm combines the CDR rule with GWF to compute the globally optimal resource allocation sequence minimizing the weighted sum of completion times. SmartFill operates backwards in $n$ stages (where $n$ is the number of jobs), at each stage solving a constrained allocation problem among the currently active jobs and updating per-job "criticality factors" via the CDR.
The outline of SmartFill is:
- Jobs are scheduled in Shortest-Job-First order.
- For each active set, allocations solve a constrained allocation problem (CAP) imposed by the CDR and the total resource budget.
- Closed-form updates are possible for power-law $s$; general concave $s$ requires numerical solution.
- Complexity is $O(n)$ GWF subproblems, polynomial in $n$ (Li et al., 1 Sep 2025).
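The staged structure can be illustrated with a deliberately simplified fluid simulation: all CDR constants are taken equal, so each stage's GWF solution degenerates to an even split, and jobs finish in SJF order (a toy sketch, not the full SmartFill constrained allocation):

```python
def staged_schedule(works, total, p=0.5):
    """Fluid simulation of staged scheduling under s(theta) = theta**p.
    Simplification: equal CDR constants, so each stage evenly splits the
    total resource among active jobs. Returns completion times in SJF
    order."""
    remaining = sorted(works)            # SJF: shortest remaining work first
    t, completions = 0.0, []
    while remaining:
        theta = total / len(remaining)   # even split = GWF w/ equal constants
        rate = theta ** p                # per-job processing rate s(theta)
        dt = remaining[0] / rate         # time until the shortest job ends
        t += dt
        completions.append(t)
        remaining = [w - rate * dt for w in remaining[1:]]
    return completions
```

Note how each completion triggers a re-allocation: the surviving jobs speed up, but only sublinearly, because $s$ is concave.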
In subgraph density maximization with a concave size function $f$, the $f$-densest subgraph problem can be solved exactly via:
- A family of linear programs, each of polynomial size, with threshold rounding, leveraging the integral structure arising from discrete concavity (Kawase et al., 2017).
- A series of minimum $s$–$t$ cut computations for unweighted graphs, reducing the ratio objective to submodular function minimization and thus to efficient combinatorial cuts.
A nearly-linear-time greedy peeling algorithm provides a constant-factor ($3$-approximation) guarantee for general weighted graphs. This guarantee critically depends on $f$ being concave, which ensures the output density is always within a factor of $3$ of optimal.
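The peeling algorithm itself is short: maintain degrees in a heap, repeatedly delete a minimum-degree vertex, and keep the best prefix under the $f$-density objective (a sketch for unweighted graphs; the names are ours):

```python
import heapq

def greedy_peel(edges, n, f):
    """Greedy peeling for the f-densest subgraph problem (unweighted).
    Returns the best ratio (surviving edges) / f(|S|) over all peeling
    prefixes, together with the size of the best vertex set."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    alive, m = set(range(n)), len(edges)
    heap = [(len(adj[v]), v) for v in range(n)]
    heapq.heapify(heap)
    best_density, best_size = m / f(n), n      # start from the full graph
    while len(alive) > 1:
        d, v = heapq.heappop(heap)
        if v not in alive or d != len(adj[v]):
            continue                           # stale heap entry: skip
        alive.remove(v)
        m -= len(adj[v])
        for u in adj[v]:
            adj[u].remove(v)
            heapq.heappush(heap, (len(adj[u]), u))
        density = m / f(len(alive))
        if density > best_density:
            best_density, best_size = density, len(alive)
    return best_density, best_size
```

On a triangle with one pendant vertex and $f(x) = \sqrt{x}$, the full graph wins with density $4/\sqrt{4} = 2$; with the classical $f(x) = x$ the triangle and the full graph tie at density $1$.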
In generalized flows with concave gains, scaling augmentation algorithms utilize the property that single-step marginal gains are nonincreasing, allowing for shortest-path augmentations and maintaining strong duality properties, with no need for explicit cycle-cancellation (Vegh, 2011).
4. Applications in Optimization and Equilibrium Computation
Concave scalability functions underpin several major problem classes:
- Parallel scheduling: Computing optimal allocation of resources (cores, bandwidth) for multiple jobs on cloud/edge systems with empirical or analytic sublinear speedup, using the SmartFill paradigm to exploit the CDR constraint (Li et al., 1 Sep 2025).
- Generalized network flows: modeling arc gains as concave (and increasing) functions, enabling efficient combinatorial $\varepsilon$-approximate solutions for market equilibria (e.g., linear Fisher/exchange and Arrow–Debreu Nash bargaining) via local linearizations and primal–dual scaling (Vegh, 2011).
- Densest subgraph and set selection: using concave $f$-density for fine-grained control of subgraph size, with LP- and flow-based exact algorithms and greedy approximations, all leveraging discrete concavity to guarantee tractability (Kawase et al., 2017).
- Quasi-concave set optimization: Although quasi-concave is a distinct property, functions induced by monotone linkage—frequently with concave structure—admit globally optimal parallel algorithms without submodularity (Vepakomma et al., 2021).
These applications highlight concavity as a critical enabler of both strong structural properties (enabling exactness or polytime approximability) and robustness to modeling real-world diminishing returns.
5. Computational Complexity and Performance Guarantees
The introduction of concavity profoundly impacts algorithmic tractability:
- In scheduling, the SmartFill algorithm for general concave $s$ runs in $O(n)$ calls to the GWF subroutine, with optimality guaranteed for any differentiable, strictly concave $s$; prior approaches such as heSRPT are limited to power functions or require approximation (Li et al., 1 Sep 2025).
- In densest subgraph problems, concave $f$ enables polynomial-time LP-based and cut-based exact algorithms, plus a 3-approximation in near-linear time; convex size functions admit no such guarantees (Kawase et al., 2017).
- In generalized flows, the scaling-type algorithm for concave gains runs in polynomial time plus polynomially many value-oracle calls, enabling $\varepsilon$-approximate equilibria for a variety of market models (Vegh, 2011).
- For quasi-concave set functions, globally optimal maximization (via T-clusters and T-series) trades serial for parallel running time, scaling from a larger bound on a single processor down to a smaller bound on many processors, with the cost of a single linkage evaluation as the basic unit (Vepakomma et al., 2021).
These results delineate polynomial or quasi-linear complexity thresholds achievable under concavity but typically impossible (NP-hard or intractable) for more general, non-concave, or supermodular settings.
6. Theoretical and Practical Significance
Concave scalability functions formalize the notion of diminishing returns rigorously and enable the deployment of:
- Generalized water-filling as a unifying solution technique
- Primal–dual and scaling algorithms in network flows
- LP and combinatorial min-cut reductions in subgraph density
- Parallelizable, globally exact algorithms in set optimization (for certain dual quasi-concave classes)
In all applications, concavity acts as the “critical balance”: it preserves sufficient generality to accurately model heterogeneous, real-world resource-pooling phenomena, while guaranteeing algorithmic tractability and tight performance bounds. Empirical evaluations show substantial performance improvements (up to 13.6% lower mean slowdown in cloud scheduling and robust improvements in subgraph density) compared to heuristics or models that rely on more restrictive functional forms (Li et al., 1 Sep 2025, Kawase et al., 2017).
A plausible implication is that many legacy or ad-hoc approaches to resource allocation, subgraph extraction, or equilibrium computation may be substantially outperformed by procedures recognizing and exploiting the structure guaranteed by concave scalability functions.
7. Connections and Extensions
Concave scalability functions are intimately connected to other domains where diminishing returns are crucial but where classical submodularity or linearity fails:
- Generalized flows: extendable to market equilibrium, bargaining, and fair division via concave gains (Vegh, 2011).
- Quasi-concave set functions: supporting parallel, globally optimal algorithms for maxi-min diversification problems, without recourse to submodularity, provided a suitable monotone linkage is identified (Vepakomma et al., 2021).
- Network resource allocation, rate control, and cooperative games: where concave payoffs model congestion, saturation, or bargaining power.
Recent literature has focused on further generalizing water-filling and primal–dual scaling to arbitrary concave differentiable forms, and characterizing tight approximability for various combinatorial and continuous optimization instances.
Concave scalability functions thus serve as a mathematical and algorithmic cornerstone in multiple areas across optimization, theoretical computer science, and applied market systems, enabling optimal or nearly-optimal solutions wherever diminishing returns are a dominant phenomenon.