Combinatorial Network Optimization with Unknown Variables: Multi-Armed Bandits with Linear Rewards (1011.4748v1)

Published 22 Nov 2010 in math.OC, cs.LG, cs.NI, and math.PR

Abstract: In the classic multi-armed bandits problem, the goal is to have a policy for dynamically operating arms that each yield stochastic rewards with unknown means. The key metric of interest is regret, defined as the gap between the expected total reward accumulated by an omniscient player that knows the reward means for each arm, and the expected total reward accumulated by the given policy. The policies presented in prior work have storage, computation and regret all growing linearly with the number of arms, which is not scalable when the number of arms is large. We consider in this work a broad class of multi-armed bandits with dependent arms that yield rewards as a linear combination of a set of unknown parameters. For this general framework, we present efficient policies that are shown to achieve regret that grows logarithmically with time, and polynomially in the number of unknown parameters (even though the number of dependent arms may grow exponentially). Furthermore, these policies only require storage that grows linearly in the number of unknown parameters. We show that this generalization is broadly applicable and useful for many interesting tasks in networks that can be formulated as tractable combinatorial optimization problems with linear objective functions, such as maximum weight matching, shortest path, and minimum spanning tree computations.

Authors (3)
  1. Yi Gai (10 papers)
  2. Bhaskar Krishnamachari (107 papers)
  3. Rahul Jain (152 papers)
Citations (252)

Summary

Combinatorial Network Optimization with Unknown Variables: Multi-Armed Bandits with Linear Rewards

The paper "Combinatorial Network Optimization with Unknown Variables: Multi-Armed Bandits with Linear Rewards" discusses a significant advancement in the field of learning algorithms, specifically addressing the scalability issues associated with the classic multi-armed bandit (MAB) problem. Unlike traditional approaches where each arm operates independently, this work introduces a framework catering to multi-armed bandits where arms are interdependent and rewards are obtained as linear combinations of unknown parameters.

In this extended formulation of the MAB problem, each arm corresponds to a vector action drawn from a potentially vast set, and its reward depends linearly on a set of underlying random variables. Prior MAB policies required storage, computation, and regret that grow linearly with the number of arms; since the number of arms can grow exponentially in the number of underlying variables, those approaches become impractical for large-scale problems.
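
To make the formulation concrete, the following minimal Python sketch sets up such a linear-reward model. The variable count, distributions, and action set here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5                                    # number of unknown underlying variables
true_means = rng.uniform(0.2, 0.9, N)    # unknown to the learner

# A small action set of 0/1 weight vectors; in general the action set
# may be exponentially large in N.
actions = [np.array(a, dtype=float) for a in
           ([1, 1, 0, 0, 0], [0, 1, 1, 0, 0], [0, 0, 1, 1, 1])]

def observe(action):
    """Return one stochastic sample of the underlying variables; the reward of
    playing `action` is the inner product np.dot(action, observe(action))."""
    return np.clip(rng.normal(true_means, 0.1), 0.0, 1.0)
```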

Central to this work is the Learning with Linear Rewards (LLR) policy, an efficient algorithm whose regret grows logarithmically in time and polynomially in the number of unknown parameters. Rather than maintaining statistics for each arm independently, the policy maintains and updates information about the underlying variables, so its storage grows only linearly with the number of unknown parameters, sidestepping the exponential complexity of the arm space.
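
A hedged sketch of an LLR-style index policy is shown below. It follows the general shape described in the paper (per-variable estimates, an optimistic exploration bonus, and a combinatorial maximization over actions under semi-bandit feedback), but the exact index form and the (L + 1) constant are assumptions for illustration, not the paper's precise expressions.

```python
import numpy as np

def llr_style_policy(actions, observe, N, L, horizon):
    """UCB-style sketch in the spirit of LLR.

    Storage is linear in N: one empirical mean and one counter per variable.
    L denotes the maximum number of variables any single action touches.
    """
    means = np.zeros(N)     # empirical means of the underlying variables
    counts = np.zeros(N)    # how often each variable has been observed
    total_reward = 0.0

    for t in range(1, horizon + 1):
        # Optimistic index per variable; (L + 1) is an illustrative constant.
        bonus = np.sqrt((L + 1) * np.log(t + 1) / np.maximum(counts, 1))
        index = means + bonus

        # Combinatorial step: maximize the linear objective over the action
        # set (brute force here; a polynomial-time solver in the applications).
        best = max(actions, key=lambda a: float(np.dot(a, index)))

        # Semi-bandit feedback: update only the variables the action touches.
        x = observe(best)
        touched = best > 0
        counts[touched] += 1
        means[touched] += (x[touched] - means[touched]) / counts[touched]
        total_reward += float(np.dot(best, x))

    return total_reward, means
```

With the toy model sketched earlier, `llr_style_policy(actions, observe, N=5, L=3, horizon=1000)` runs the loop end to end; the brute-force `max` over actions would be replaced by a polynomial-time solver in the applications discussed next.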

The authors substantiate the efficacy of their method by applying it to several network optimization problems, each of which can be formulated as a combinatorial task with a linear objective function. Notable examples include maximum weight matching, shortest path, and minimum spanning tree computations, demonstrating the policy's applicability across different domains and confirming its utility beyond theoretical interest.
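
As one concrete illustration, the shortest-path case can be sketched by plugging Dijkstra's algorithm into the combinatorial step: each edge delay is an unknown variable, and each round the path minimizing the optimistic delay indices is routed. The graph, delay distributions, and exploration constant below are hypothetical, chosen only to show the structure.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(1)
G = nx.Graph()
edges = [("s", "a", 1.0), ("s", "b", 2.0), ("a", "b", 0.5),
         ("a", "t", 2.5), ("b", "t", 1.0)]
for u, v, mean_delay in edges:
    G.add_edge(u, v, mean=mean_delay, est=0.0, count=0)

for t in range(1, 201):
    # Optimistic (lower-confidence) delay index per edge.
    for u, v, d in G.edges(data=True):
        bonus = np.sqrt(2 * np.log(t + 1) / max(d["count"], 1))
        d["index"] = max(d["est"] - bonus, 0.0)

    # Combinatorial step: Dijkstra on the index weights instead of brute force.
    path = nx.shortest_path(G, "s", "t", weight="index")

    # Semi-bandit feedback: observe a delay sample on each traversed edge.
    for u, v in zip(path, path[1:]):
        d = G[u][v]
        sample = max(rng.normal(d["mean"], 0.3), 0.0)
        d["count"] += 1
        d["est"] += (sample - d["est"]) / d["count"]
```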

Numerical results for resource allocation in cognitive radio networks highlight the policy's viability in practice, showing a notable improvement in regret over naive policies that treat each arm independently. The LLR policy is thus a powerful tool for optimizing tasks marked by vast decision spaces and linear dependencies, particularly in stochastic environments.

This work suggests several directions for future research. Extending these ideas to non-linear reward functions is an intriguing prospect, and deriving a lower bound on achievable regret for this general class of linear MAB problems remains an open question. Variations of these policies adapted to distributed and decentralized decision-making environments, such as distributed cognitive radio networks, also point to promising applications.

In summary, this contribution addresses a critical gap in MAB problem-solving, offering scalable and efficient solutions for complex, network-based combinatorial tasks under uncertainty, with relevance to both theory and practice.