Multi-Armed Bandits at MSR-SV

This page has been inactive since the closure of Microsoft Research Silicon Valley in September 2014.

Multi-Armed Bandits

This is an umbrella project for several related efforts at Microsoft Research Silicon Valley that address various Multi-Armed Bandit (MAB) formulations motivated by web search and ad placement. The MAB problem is a classical paradigm in Machine Learning in which an online algorithm chooses from a set of strategies in a sequence of trials so as to maximize the total payoff of the chosen strategies.

The name "multi-armed bandits" comes from a whimsical scenario in which a gambler faces several slot machines, a.k.a. "one-armed bandits", that look identical at first but produce different expected winnings. The crucial issue is the trade-off between acquiring new information (exploration) and capitalizing on the information available so far (exploitation). While MAB problems have been studied extensively in Machine Learning, Operations Research, and Economics, many exciting questions remain open. One aspect we are particularly interested in is modeling, and efficiently using, the various types of side information that may be available to the algorithm.
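To make the explore-exploit trade-off concrete, a standard index policy such as UCB1 (Auer, Cesa-Bianchi, and Fischer, 2002) plays the arm whose empirical mean plus a confidence bonus is largest, so under-explored arms get tried while good arms get exploited. This is a minimal illustrative sketch, not code from the projects below; the Bernoulli arm means are hypothetical values chosen for the example.

```python
import math
import random

def ucb1_choose(counts, sums, t):
    """Pick an arm by the UCB1 rule at round t.

    counts[i] = number of pulls of arm i so far, sums[i] = total reward
    from arm i. Each arm is played once first; afterwards we pick the arm
    maximizing empirical mean + sqrt(2 ln t / counts[i]).
    """
    for i, n in enumerate(counts):
        if n == 0:
            return i
    return max(
        range(len(counts)),
        key=lambda i: sums[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i]),
    )

def run_ucb1(means, horizon, seed=0):
    """Simulate UCB1 on Bernoulli arms with the given (hypothetical) means."""
    rng = random.Random(seed)
    k = len(means)
    counts, sums = [0] * k, [0.0] * k
    for t in range(1, horizon + 1):
        arm = ucb1_choose(counts, sums, t)
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
    return counts

# Over a long enough horizon, the best arm (mean 0.7) is pulled most often.
counts = run_ucb1([0.3, 0.5, 0.7], horizon=5000)
```

The confidence bonus shrinks as an arm is pulled more, which is what gradually shifts the policy from exploration to exploitation.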

Contact: Alex Slivkins.

Research directions

MAB problems with similarity information
MAB problems in a changing environment
Explore-exploit tradeoff in mechanism design
Explore-exploit learning with limited resources
Risk vs. reward trade-off in MAB

People at MSR-SVC

Ittai Abraham
Moshe Babaioff
Sreenivas Gollapudi
Nina Mishra
Rina Panigrahy
Alex Slivkins

External visitors and collaborators

Prof. Sébastien Bubeck (Princeton)
Prof. Robert Kleinberg (Cornell)
Filip Radlinski (MSR Cambridge)
Prof. Eli Upfal (Brown)

Former interns

Yogi Sharma (Cornell → Facebook; intern at MSR-SV in summer 2008)
Umar Syed (Princeton → Google; intern at MSR-SV in summer 2008)
Shaddin Dughmi (Stanford → USC; intern at MSR-SV in summer 2010)
Ashwinkumar Badanidiyuru (Cornell → Google; intern at MSR-SV in summer 2012)