Combinatorial Network Optimization With Unknown Variables: Multi-Armed Bandits With Linear Rewards

In the classic multi-armed bandits problem, the goal is to design a policy for dynamically selecting among arms that each yield stochastic rewards with unknown means. The key metric of interest is regret, defined as the gap between the expected total reward accumulated by an omniscient player that knows the reward mean of each arm, and the expected total reward accumulated by the given policy. The policies presented in prior work have storage, computation, and regret all growing linearly with the number of arms, which is not scalable when the number of arms is large.
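To make the setting concrete, here is a minimal sketch of a classic index policy (UCB1) for the basic multi-armed bandit, with regret measured exactly as defined above. The Bernoulli arm means, horizon, and seed are illustrative assumptions, not values from the paper; note that the policy's state (play counts and empirical means) grows linearly with the number of arms, which is the scalability issue the abstract highlights.

```python
import math
import random

def ucb1(means, horizon, seed=0):
    """Run the UCB1 index policy on Bernoulli arms with the given
    (hidden) reward means, and return the regret: the gap between
    the omniscient player's expected total reward (horizon * best
    mean) and the reward actually accumulated by the policy."""
    rng = random.Random(seed)
    n = len(means)
    counts = [0] * n          # plays per arm: O(n) storage
    estimates = [0.0] * n     # empirical mean reward per arm: O(n) storage
    total_reward = 0.0
    for t in range(1, horizon + 1):
        if t <= n:
            arm = t - 1       # play each arm once to initialize
        else:
            # choose the arm maximizing empirical mean + confidence bonus
            arm = max(range(n), key=lambda i: estimates[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total_reward += reward
    return horizon * max(means) - total_reward
```

For a few arms this works well (regret grows only logarithmically in the horizon), but in a combinatorial network setting where each "arm" is, say, a path or a matching, the number of arms explodes, so per-arm storage and computation of this kind become impractical.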

Provided by: University of Southern California · Topic: Networking · Date Added: Dec 2010 · Format: PDF
