Online Fair Division: Towards Ex-Post Constant MMS Guarantees (2503.02088v1)

Published 3 Mar 2025 in cs.GT, cs.DS, and cs.MA

Abstract: We investigate the problem of fairly allocating $m$ indivisible items among $n$ sequentially arriving agents with additive valuations, under the sought-after fairness notion of maximin share (MMS). We first observe a strong impossibility: without appropriate knowledge about the valuation functions of the incoming agents, no online algorithm can ensure any non-trivial MMS approximation, even when there are only two agents. Motivated by this impossibility, we introduce OnlineKTypeFD (online $k$-type fair division), a model that balances theoretical tractability with real-world applicability. In this model, each arriving agent belongs to one of $k$ types, with all agents of a given type sharing the same known valuation function. We do not constrain $k$ to be a constant. Upon arrival, an agent reveals her type, receives an irrevocable allocation, and departs. We study the ex-post MMS guarantees of online algorithms under two arrival models:

1. Adversarial arrivals: In this model, an adversary determines the type of each arriving agent. We design a $\frac{1}{k}$-MMS competitive algorithm and complement it with a lower bound, ruling out any $\Omega(\frac{1}{\sqrt{k}})$-MMS-competitive algorithm, even for binary valuations.

2. Stochastic arrivals: In this model, the type of each arriving agent is independently drawn from an underlying, possibly unknown distribution. Unlike the adversarial setting, where the dependence on $k$ is unavoidable, we surprisingly show that in the stochastic setting an asymptotic, arbitrarily close-to-$\frac{1}{2}$-MMS competitive guarantee is achievable under mild distributional assumptions.

Our results extend naturally to a learning-augmented framework: when given access to predictions about valuation functions, we show that the competitive ratios of our algorithms degrade gracefully with multiplicative prediction errors.
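
To make the maximin share benchmark concrete, the sketch below (not from the paper; the function name mms_value and the toy numbers are illustrative assumptions) computes an agent's MMS value under additive valuations by brute force: the agent imagines partitioning all $m$ items into $n$ bundles herself and is entitled to the value of the worst bundle, maximized over all such partitions. An allocation is $\alpha$-MMS for her if the bundle she actually receives is worth at least $\alpha$ times this quantity.

```python
from itertools import product

def mms_value(valuation, num_agents):
    """Brute-force maximin share (MMS) of one agent with an additive valuation.

    valuation: list of this agent's values for the m items.
    num_agents: n, the number of bundles in the partition.
    Returns the largest value the agent can guarantee herself if she
    partitions the items into n bundles and keeps the worst bundle.
    """
    m = len(valuation)
    best = 0
    # Enumerate every assignment of items to the n bundles
    # (n^m possibilities; only feasible for small instances).
    for assignment in product(range(num_agents), repeat=m):
        bundle_totals = [0] * num_agents
        for item, bundle in enumerate(assignment):
            bundle_totals[bundle] += valuation[item]
        best = max(best, min(bundle_totals))
    return best

# Hypothetical example: 5 items shared by 3 agents of the same type.
values = [6, 5, 4, 3, 2]
print(mms_value(values, 3))  # 6, e.g. via the partition {6}, {5, 2}, {4, 3}
```

This brute force runs in $O(n^m)$ time and is meant only to pin down the definition; computing MMS values exactly is NP-hard in general, which is one reason approximate guarantees such as $\frac{1}{k}$-MMS are the natural target in the online setting studied here.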
