Online Sampling and Decision Making with Low Entropy (2111.13203v4)

Published 25 Nov 2021 in cs.DS

Abstract: Consider the following problem: we are given $n$ boxes, labeled $\{1,2,\ldots,n\}$ by an adversary, each containing a single number chosen from an unknown distribution; these $n$ distributions are not necessarily identical. We are also given an integer $k \leq n$. We must choose an order in which to sequentially open the boxes, and each time we open the next box in this order, we learn the number inside. Once we reject the number in a box, that box cannot be recalled. Our goal is to accept $k$ of these numbers, without necessarily opening all boxes, such that the accepted numbers are the $k$ largest numbers in the boxes (so their sum is maximized). A natural approach to such problems is to use randomness to sample randomly ordered elements; however, as indicated in several sources, e.g., Turan et al. NIST'15 and Bierhorst et al. Nature'18, pure randomness is hard to obtain in practice. We present an algorithm for this problem that is provably and simultaneously near-optimal with respect to both the achieved competitive ratio and the amount of randomness used. In particular, we construct a distribution on the orders with entropy $\Theta(\log\log n)$ such that a deterministic multiple-threshold algorithm achieves a competitive ratio of $1-O(\sqrt{\log k/k})$, for $k < \log n/\log \log n$. Our competitive ratio is simultaneously optimal and uses optimal entropy $\Theta(\log\log n)$, improving in three ways upon the previous best known algorithm, whose competitive ratio is $1 - O(1/k^{1/3}) - o(1)$.
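To make the online accept/reject model concrete, here is a minimal sketch of a generic single-threshold strategy for the $k$-selection setting the abstract describes: reject everything during an observation prefix, set a threshold from what was seen, then irrevocably accept later values that beat it. This is an illustrative simplification, not the paper's multiple-threshold low-entropy algorithm; the function name, the half-length observation phase, and the choice of the $k$-th largest observed value as threshold are all assumptions made for the sketch.

```python
def threshold_k_select(values, k):
    """Illustrative single-threshold sketch of online k-selection.

    values: the numbers revealed in the (already fixed) opening order.
    k:      how many numbers we may accept.

    NOTE: this is a generic textbook-style strategy, not the paper's
    multiple-threshold algorithm with entropy Theta(log log n).
    """
    n = len(values)
    sample_size = n // 2  # observation phase: open boxes, reject all

    observed = values[:sample_size]
    if len(observed) >= k:
        # Threshold = k-th largest value seen during observation.
        threshold = sorted(observed, reverse=True)[k - 1]
    else:
        threshold = float("-inf")

    accepted = []
    for v in values[sample_size:]:
        # Decisions are irrevocable: once a value is skipped, it is gone.
        if len(accepted) < k and v > threshold:
            accepted.append(v)
    return accepted
```

For example, `threshold_k_select([5, 3, 8, 1, 9, 7, 10, 2], k=2)` observes `[5, 3, 8, 1]`, sets the threshold to 5, and then accepts 9 and 7 as they arrive, stopping before it can see the 10. That miss is exactly the kind of loss the competitive ratio measures.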
