Algorithms and Adaptivity Gaps for Stochastic $k$-TSP (1911.02506v1)

Published 6 Nov 2019 in cs.DS and cs.DM

Abstract: Given a metric $(V,d)$ and a $\textsf{root} \in V$, the classic $\textsf{$k$-TSP}$ problem is to find a tour originating at the $\textsf{root}$ of minimum length that visits at least $k$ nodes in $V$. In this work, motivated by applications where the input to an optimization problem is uncertain, we study two stochastic versions of $\textsf{$k$-TSP}$. In Stoch-Reward $k$-TSP, originally defined by Ene-Nagarajan-Saket [ENS17], each vertex $v$ in the given metric $(V,d)$ contains a stochastic reward $R_v$. The goal is to adaptively find a tour of minimum expected length that collects reward at least $k$; here "adaptively" means that each decision may depend on the outcomes observed so far. Ene et al. give an $O(\log k)$-approximation adaptive algorithm for this problem and leave open whether an $O(1)$-approximation algorithm exists. We resolve their open question, in fact giving an $O(1)$-approximation \emph{non-adaptive} algorithm for this problem. We also introduce, and obtain similar results for, the Stoch-Cost $k$-TSP problem, in which each vertex $v$ has a stochastic cost $C_v$ and the goal is to visit and select at least $k$ vertices so as to minimize the expected \emph{sum} of the tour length and the costs of the selected vertices. This problem generalizes the Price of Information framework [Singla18] from deterministic probing costs to metric probing costs. Our techniques rest on two crucial ideas: "repetitions" and "critical scaling". Using Freedman's and Jogdeo-Samuels' inequalities, we show that for our problems, if we truncate the random variables at an ideal threshold and repeat, then their expected values form a good surrogate. Unfortunately, this ideal threshold is adaptive, since it depends on how far we are from achieving the target $k$; we therefore truncate at several different scales and identify a "critical" scale.
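As a toy illustration of the "repetitions" and "critical scaling" ideas, the sketch below estimates the truncated expectations $E[\min(R_v, \theta)]$ by Monte-Carlo sampling and scans geometric scales $\theta = k, k/2, k/4, \ldots$ for the largest one at which the truncated surrogates still reach the target $k$. This is a minimal, hypothetical Python rendering, not the paper's algorithm: the function names and reward distributions are made up, and the simple sum-based criterion stands in for the surrogate $k$-TSP instance the paper solves at each scale.

```python
import random

def truncated_mean(sampler, theta, trials=10_000):
    """Monte-Carlo estimate of E[min(R, theta)], the truncated expectation."""
    return sum(min(sampler(), theta) for _ in range(trials)) / trials

def critical_scale(samplers, k):
    """Scan geometric scales theta = k, k/2, k/4, ... and return the largest
    scale at which the truncated expectations still sum to at least k.
    (A deliberately simplified stand-in for the paper's criterion, which
    solves a surrogate k-TSP instance at each scale.)"""
    theta = float(k)
    while theta >= 1.0:
        surrogate = {v: truncated_mean(s, theta) for v, s in samplers.items()}
        if sum(surrogate.values()) >= k:
            return theta, surrogate
        theta /= 2.0
    # No scale reached the target; report the finest scale tried.
    return theta, {v: truncated_mean(s, theta) for v, s in samplers.items()}

if __name__ == "__main__":
    random.seed(0)
    # Hypothetical reward distributions R_v for three vertices.
    samplers = {
        "a": lambda: random.expovariate(1 / 4),  # exponential, mean 4
        "b": lambda: random.choice([0, 10]),     # fair coin between 0 and 10
        "c": lambda: random.uniform(0, 6),       # uniform on [0, 6]
    }
    theta, surrogate = critical_scale(samplers, k=8)
    print("critical scale:", theta)
    print("surrogate rewards E[min(R_v, theta)]:", surrogate)
```

In this toy, the returned $\theta$ plays the role of the critical scale: truncating well below it discards too much reward, while the untruncated means would let a few heavy-tailed vertices dominate the surrogate.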

Citations (14)
