Aleksandrs Slivkins: publications

  1. Incentivizing exploration via information asymmetry
    Aleksandrs Slivkins.
    XRDS: Crossroads, The ACM Magazine for Students, Vol. 24(1), Fall 2017. As self-interested individuals make decisions over time, they utilize information revealed by others in the past and produce information that may help others in the future. So how can we incentivize exploration for the sake of the common good?
  2. Multi-World Testing: A System for Experimentation, Learning, and Decision-Making (rev. Jul'16)
    Alekh Agarwal, Sarah Bird, Markus Cozowicz, Miro Dudik, Luong Hoang, John Langford, Lihong Li, Dan Melamed, Gal Oshri, Siddhartha Sen, Alex Slivkins. (The MWT project.) Multi-World Testing (MWT) is a methodology for principled and efficient experimentation, learning, and decision-making. It is plausibly applicable to most services that interact with customers; in many scenarios, it is exponentially more efficient than traditional A/B testing. The underlying research area is known as "contextual bandits" and "counterfactual evaluation".
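    As a rough illustration of the counterfactual-evaluation idea underlying MWT, here is a minimal sketch in Python (not the production system; the log format and names are ours): logs of a randomized policy can be replayed to estimate how a different policy would have performed, without deploying it.

      # Inverse propensity scoring (IPS), the basic counterfactual estimator.
      # Each logged record: (context, chosen_action, observed_reward, prob),
      # where prob is the probability with which the logger chose that action.
      def ips_estimate(logs, new_policy):
          """Unbiased estimate of new_policy's average reward from randomized logs."""
          total = 0.0
          for context, action, reward, prob in logs:
              if new_policy(context) == action:
                  total += reward / prob  # reweight rounds where the policies agree
          return total / len(logs)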
  3. Online Decision Making in Crowdsourcing Markets: Theoretical Challenges
    Aleksandrs Slivkins and Jennifer Wortman Vaughan
    SIGecom Exchanges, Dec 2013. In crowdsourcing markets, task requesters and the platform itself make repeated decisions about which tasks to assign to each worker at which price. Designing algorithms for making these decisions is a rich, emerging problem space. We survey this problem space, point out significant modeling difficulties, and identify directions to make progress.
  4. Crowdsourcing Gold-HIT Creation at Scale: Challenges and Adaptive Exploration Approaches
    I. Abraham, O. Alonso, V. Kandylas, R. Patel, S. Shelford, A. Slivkins, H. Wu
    CrowdScale 2013: Workshop on Crowdsourcing at Scale. Gold HITs --- Human Intelligence Tasks with known answers --- are commonly used to measure worker performance and data quality in industrial applications of crowdsourcing. We suggest adaptive exploration as a promising approach for automated, scalable Gold HIT creation. We substantiate this with initial experiments in a stylized model.
  1. Incentivizing Exploration with Unbiased Histories (2018-2019) [poster]
    Nicole Immorlica, Jieming Mao, Aleksandrs Slivkins, and Zhiwei Steven Wu. The goal is to incentivize exploration in recommendation systems under weaker assumptions of rationality and trust. Our algorithm constructs a partial order on the users, which is published ahead of time, and provides each user with feedback from all preceding users in the order (and no other info). We achieve near-optimal learning performance for a range of behavioral models.
  2. Greedy Algorithm almost Dominates in Smoothed Contextual Bandits (2018)
    Manish Raghavan, Aleksandrs Slivkins, Jennifer Wortman Vaughan and Zhiwei Steven Wu
    Under submission to a journal. Preliminary version of these (and other) results in COLT 2018. We consider the greedy algorithm in linear contextual bandits. We prove that it is optimal, in a very strong sense, if the problem instance is sufficiently "smoothed".
  3. Bayesian Exploration: Incentivizing Exploration in Bayesian Games (2016-2018) [slides]
    Yishay Mansour, Aleksandrs Slivkins, Vasilis Syrgkanis and Steven Wu
    Preliminary version in EC 2016.
    To appear in Operations Research after a major revision. At each time step, multiple agents arrive, play a fixed Bayesian game, and leave forever. Agents' decisions reveal info that can help future agents, creating a tradeoff between exploration, exploitation, and agents' incentives. We design a social planner which learns over time and coordinates the agents towards socially desirable outcomes.
  4. SAYER: Counterfactual Evaluation of Systems (2018)
    Mathias Lecuyer, Mihir Nanavati, Junchen Jiang, Azadeh Mobasher, Alexandra Savelieva, Siddhartha Sen, Amit Sharma, Aleksandrs Slivkins
    Under submission. Available upon request.
  1. Bayesian Incentive-Compatible Bandit Exploration (rev. 2018) (slides)
    Yishay Mansour, Aleksandrs Slivkins and Vasilis Syrgkanis
    A deep revision of the paper in EC 2015.
    To appear in Operations Research. We design bandit algorithms that recommend actions to self-interested agents (who then decide which actions to take). By means of carefully designed information disclosure, we incentivize the agents to balance exploration and exploitation so as to maximize social welfare.
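    The incentive constraint at the heart of this design can be sketched as follows (notation ours): with mu_a the expected reward of action a given the algorithm's information, a recommendation policy sigma is Bayesian incentive-compatible if
      \[ \mathbb{E}[\, \mu_a - \mu_{a'} \mid \sigma = a \,] \;\ge\; 0 \quad \text{for all actions } a, a', \]
    i.e., conditional on being recommended action a, an agent weakly prefers to follow the recommendation.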
  2. Bandits and Experts in Metric Spaces (rev. 2018)
    Robert Kleinberg, Aleksandrs Slivkins and Eli Upfal.
    A merged and heavily revised version of papers in STOC'08 and SODA'10.
    J. of the ACM, Volume 66, Issue 4, May 2019. We introduce the 'Lipschitz bandits problem': a stochastic bandit problem, possibly with a very large set of arms, such that the expected payoffs obey a Lipschitz condition with respect to a given metric space. The goal is to minimize regret as a function of time, both in the worst case and for 'nice' problem instances.
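    In symbols (a sketch of the setup, notation ours): arms form a metric space (X, D), the expected payoff function mu : X -> [0,1] is Lipschitz,
      \[ |\mu(x) - \mu(y)| \;\le\; D(x, y) \quad \text{for all arms } x, y \in X, \]
    and regret after t rounds is \( R(t) = t \cdot \sup_{x \in X} \mu(x) - \sum_{s \le t} \mu(x_s) \), where x_s is the arm played in round s.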
  3. Multidimensional Dynamic Pricing for Welfare Maximization
    Aaron Roth, Aleksandrs Slivkins, Jonathan Ullman, and Steven Wu
    To appear in the Special Issue of ACM TEAC for EC 2017 (after a minor revision). We solve a dynamic pricing problem with d divisible goods. Buyers have IID valuations over purchased bundles. We optimize welfare (including seller's production costs) in #rounds polynomial in d and the accuracy parameter. Crucially, we make assumptions (concavity and Hölder-continuity) on buyers' valuations, rather than on the aggregate response to prices.
  4. Bandits with knapsacks (rev. 2017)
    Ashwinkumar Badanidiyuru, Robert Kleinberg and Aleksandrs Slivkins
    J. of the ACM, Vol. 65 Issue 3, March 2018.
    (Preliminary version in FOCS 2013.) We define a broad class of explore-exploit problems with knapsack-style resource utilization constraints, subsuming dynamic pricing, dynamic procurement, pay-per-click ad allocation, and many other problems. Our algorithms achieve optimal regret w.r.t. the optimal dynamic policy.
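    Schematically (a sketch of the model, notation ours): in each round t the algorithm picks an arm, collects a reward r_t in [0,1], and consumes an amount c_{t,j} in [0,1] of each resource j; it must stop once any budget B_j runs out. The goal is
      \[ \max \; \mathbb{E}\Big[ \textstyle\sum_t r_t \Big] \quad \text{subject to} \quad \textstyle\sum_t c_{t,j} \le B_j \;\; \text{for each resource } j, \]
    with regret measured against the best dynamic policy rather than the best fixed arm.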
  5. Truthful Mechanisms with Implicit Payment Computation (Rev. Nov'15)
    Moshe Babaioff, Robert Kleinberg and Aleksandrs Slivkins
    J. of the ACM, Vol. 62, Issue 2, May 2015.
    (Preliminary versions in EC 2010 and EC 2013.)
    The latest revision reflects some minor bug fixes. We show that payment computation does not present any obstacle in designing truthful mechanisms, even when we can only call the allocation rule once. Applying this to multi-armed bandits (MAB), we design truthful MAB mechanisms for stochastic payoffs. More generally, we open up a problem of designing monotone MAB allocation rules.
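    For single-parameter domains, truthfulness essentially pins down the payment via Myerson's identity: with monotone allocation x_i and bid b_i,
      \[ p_i(b) \;=\; b_i \, x_i(b) \;-\; \int_0^{b_i} x_i(z, \, b_{-i}) \, dz. \]
    Roughly, the mechanism computes this payment implicitly: a single randomized call to the allocation rule (occasionally rescaling a bid downward at random) yields an unbiased estimator of the integral. This is a sketch of the idea, not the exact construction.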
  6. Adaptive Contract Design for Crowdsourcing Markets:
    Bandit Algorithms for Repeated Principal-Agent Problems
    Chien-Ju Ho, Aleksandrs Slivkins and Jennifer Wortman Vaughan.
    JAIR: J. of Artificial Intelligence Research, Vol. 54, 2015. (Special Track on Human Computation)
    (Preliminary version in EC 2014.) We consider a repeated version of the principal-agent model in which the principal can revise the contract over time, and the agent can strategically choose the (unobservable) effort level. We treat this as a multi-armed bandit problem, and design an algorithm that adaptively refines the partition of the action space without relying on Lipschitz assumptions.
  7. Selection and Influence in Cultural Dynamics
    David Kempe, Jon Kleinberg, Sigal Oren and Aleksandrs Slivkins
    Network Science, vol. 4(1), 2016.
    (Preliminary version in EC 2013.) Influence and selection -- the tendencies to, respectively, become similar to one's friends and to interact with similar people -- work in opposite directions: respectively, towards homogeneity and fragmentation. We analyze the societal outcomes when both forces are in effect. We consider a natural class of models from prior work on political opinion formation, cultural diversity, and language evolution.
  8. Low-distortion Inference of Latent Similarities from a Multiplex Social Network
    Ittai Abraham, Shiri Chechik, David Kempe and Aleksandrs Slivkins
    SICOMP: SIAM J. on Computing, Vol. 44(3), 2015.
    (Preliminary version in SODA 2013.) The observed social network is a noisy signal about the latent "social space": the ways in which individuals are (dis)similar to one another. We present near-linear time algorithms which, under some standard models, can infer the social space with provable guarantees.
  9. Dynamic pricing with limited supply (rev. Nov'13)
    Moshe Babaioff, Shaddin Dughmi, Robert Kleinberg and Aleksandrs Slivkins
    Special issue for EC 2012: ACM Trans. on Economics and Computation, 3(1): 4 (2015). We consider dynamic pricing with limited supply and unknown demand distribution. We extend multi-armed bandit techniques to the limited supply setting, and obtain optimal regret rates.
  10. Contextual bandits with similarity information
    JMLR: J. of Machine Learning Research, 15(Jul):2533-2568, 2014.
    (Preliminary version in COLT 2011.) We study contextual bandits with a known metric over context-arm pairs which upper-bounds the local change in rewards. The main algorithmic idea is to adapt the partitions of the metric space to frequent context arrivals and high-payoff regions. Our framework also handles slowly changing payoffs and variable sets of arms.
  11. Ranked bandits in metric spaces: learning optimally diverse rankings over large document collections (rev. Sep'12)
    Aleksandrs Slivkins, Filip Radlinski and Sreenivas Gollapudi
    JMLR: J. of Machine Learning Research, 14(Feb):399-436, 2013.
    (Preliminary version in ICML 2010.) We present a learning-to-rank framework for web search that incorporates similarity and correlation between documents and thus, unlike prior work, scales to large document collections.
  12. Triangulation and Embedding using Small Sets of Beacons
    Jon Kleinberg, Aleksandrs Slivkins and Tom Wexler.
    J. of the ACM, 56(6), Sept 2009.
    Significantly revised merge of papers from FOCS 2004 and SODA 2005. We consider metric embeddings and triangulation-based distance estimation in a distributed framework with low load on the participating nodes. Our results provide theoretical insight into the empirical success of several recent Internet-related projects.
  13. Characterizing Truthful Multi-Armed Bandit Mechanisms
    Moshe Babaioff, Yogeshwer Sharma and Aleksandrs Slivkins
    SICOMP: SIAM J. on Computing, Vol. 43, No. 1, pp. 194-230, 2014.
    (Preliminary version in EC 2009.) We consider a natural strategic version of multi-armed bandits (MAB), motivated by pay-per-click auctions. We show that requiring an MAB algorithm to be incentive-compatible has striking consequences both for structure and regret.
  14. Metric Embeddings with Relaxed Guarantees
    T-H. Hubert Chan, Kedar Dhamdhere, Anupam Gupta, Jon Kleinberg and A. Slivkins
    SIAM J. on Computing, 38(6): 2303-2329, March 2009.
    (Preliminary version in FOCS 2005.) Given any x, any metric admits a low-dim embedding into Lp, p>=1, with distortion D(x) = O(log 1/x) on all but an x-fraction of edges. Moreover, any decomposable metric (e.g. any doubling metric) admits a low-dim embedding such that D(x) = O(log 1/x)^{1/p} for all x.
  15. Distance Estimation and Object Location via Rings of Neighbors
    Special issue of "Distributed Computing" for PODC 2005: Vol. 19, No. 4. (March 2007). We approach several problems on distance estimation and object location with a unified technique called ''rings of neighbors''. Using this technique on metrics of low doubling dimension, we obtain significant improvements for low-stretch routing schemes, searchable small-world networks, distance labeling, and triangulation-based distance estimation.
  16. Network Failure Detection and Graph Connectivity
    Jon Kleinberg, Mark Sandler and Aleksandrs Slivkins.
    SIAM J. on Computing, 38(4): 1330-1346, Aug 2008.
    (Preliminary version in SODA 2004.) We detect network partitions -- with strong provable guarantees -- using a small set of 'agents' placed randomly on nodes of the network. We parameterize our guarantees by edge- and node-connectivity of the underlying graph.
  17. Parameterized Tractability of Edge-Disjoint Paths on DAGs
    SIAM J. on Discrete Math, 24(1): 146-157, Feb 2010.
    (Preliminary version in ESA 2003.) We resolve a long-standing open question about k-edge-disjoint paths: we show that this problem is W[1]-hard on DAGs, hence unlikely to admit running time f(k)*poly(n). However, such running time can be achieved if the input+demands graph is almost Eulerian.
  18. Interleaving Schemes on Circulant Graphs with Two Offsets
    Aleksandrs Slivkins and Shuki Bruck.
    Discrete Mathematics 309(13): 4384-4398, July 2009.
    Undergraduate research project (1999-2000), tech report (2002). We construct optimal interleaving schemes on infinite circulant graphs with two offsets. Interleaving is used for error correction on a bursty noisy channel.
  1. Adversarial Bandits with Knapsacks [poster]
    Nicole Immorlica, Karthik A. Sankararaman, Aleksandrs Slivkins and Rob Schapire
    FOCS 2019: IEEE Symp. on Foundations of Computer Science. Bandits with Knapsacks is a broad class of explore-exploit problems with knapsack-style resource utilization constraints. While all prior work is for the stochastic version, we target the adversarial version and obtain an optimal solution. We build on a new, game-theoretic interpretation (and a simpler algorithm) for the stochastic version.
  2. The Perils of Exploration under Competition: A Computational Modeling Approach [poster]
    Guy Aridor, Kevin Liu, Aleksandrs Slivkins, Zhiwei Steven Wu
    EC 2019: ACM Symp. on Economics and Computation. Many online platforms learn from interactions with users, and explore: make potentially suboptimal choices for the sake of acquiring new information. We study the interplay between exploration and competition for users. We run extensive numerical experiments in a stylized duopoly model, asking whether/when competition incentivizes better algorithms for exploration.
  3. Contextual Bandits with Continuous Actions: Smoothing, Zooming, and Adapting [poster]
    Akshay Krishnamurthy, John Langford, Aleksandrs Slivkins, Chicheng Zhang
    COLT 2019: Conf. on Learning Theory. We study contextual bandits with an abstract policy class and continuous action space. We obtain two algorithms: one competes with a smoothed version of the policy class under no continuity assumptions, while the other requires standard Lipschitz assumptions. Both algorithms "zoom in" on better regions of the action space, with improved performance on "benign" problems.
  4. Bayesian Exploration with Heterogenous Agents [blog post]
    Nicole Immorlica, Jieming Mao, Aleksandrs Slivkins and Zhiwei Steven Wu
    The Web Conference 2019 (with oral presentation). We incentivize exploration for heterogeneous users. We design near-optimal personalized recommendation policies for several versions of the model, depending on whether and when the user types are reported to the principal. We also investigate how the model choice and the user diversity affect the set of "explorable" actions.
  5. The Externalities of Exploration and How Data Diversity Helps Exploitation
    Manish Raghavan, Aleksandrs Slivkins, Jennifer Wortman Vaughan and Zhiwei Steven Wu
    COLT 2018: Conf. on Learning Theory. We initiate the study of "externalities of exploration", with linear contextual bandits as a model. We show that the presence of one population group ("majority") can sometimes substantially reduce the rewards of another ("minority"), and no algorithm can avoid it. We also prove that the greedy algorithm is optimal, in a very strong sense, if the problem instance is sufficiently "smoothed".
  6. Combinatorial Semi-Bandits with Knapsacks
    Karthik Abinav Sankararaman and Aleksandrs Slivkins
    AISTATS 2018: Intl. Conf. on AI and Statistics (oral presentation, top 5% of submissions). We solve a common generalization of "combinatorial semi-bandits" and "bandits with knapsacks". That is, actions are subsets of "atoms", and the algorithm consumes some limited resources. For each atom the algorithm collects a reward and consumes some amount of each resource.
  7. Competing Bandits: Learning under Competition
    Yishay Mansour, Aleksandrs Slivkins, and Zhiwei Steven Wu
    ITCS 2018: Conf. on Innovations in Theoretical Computer Science. Most modern systems strive to learn from interactions with users, and many engage in exploration: making potentially suboptimal choices for the sake of acquiring new information. We initiate a study of the interplay between exploration and competition---how such systems balance the exploration for learning and the competition for users.
  8. Harvesting Randomness to Optimize Distributed Systems
    Mathias Lecuyer, Joshua Lockerman, Lamont Nelson, Sid Sen, Amit Sharma, and Alex Slivkins
    HotNets 2017: ACM Workshop on Hot Topics in Networks. Randomized decisions in cloud systems are a powerful resource for offline optimization. We show how to collect data from existing systems, without modifying them, to evaluate new policies, without deploying them.
  9. Multidimensional Dynamic Pricing for Welfare Maximization
    Aaron Roth, Aleksandrs Slivkins, Jonathan Ullman, and Steven Wu
    EC 2017: ACM Symp. on Economics and Computation (invited to the Special Issue). We solve a dynamic pricing problem with d divisible goods. Buyers have IID valuations over purchased bundles. We optimize welfare (including seller's production costs) in #rounds polynomial in d and the accuracy parameter. Crucially, we make assumptions (concavity and Hölder-continuity) on buyers' valuations, rather than on the aggregate response to prices.
  10. A Polynomial Time Algorithm For Spatio-Temporal Security Games
    Soheil Behnezhad, Mahsa Derakhshan, MohammadTaghi HajiAghayi, and Aleksandrs Slivkins.
    EC 2017: ACM Symp. on Economics and Computation. We study a practically important class of security games with targets and “patrols” moving on a real line. We compute the Nash equilibrium in time polynomial in the input size, and only polylogarithmic in the number of possible patrol locations (M). Prior work made substantial assumptions, e.g., a constant number of rounds, and had running times polynomial in M.
  11. Bayesian Exploration: Incentivizing Exploration in Bayesian Games (rev. 2018) (slides)
    Yishay Mansour, Aleksandrs Slivkins, Vasilis Syrgkanis and Steven Wu
    EC 2016. At each time step, multiple agents arrive, play a fixed Bayesian game, and leave forever. Agents' decisions reveal info that can help future agents, creating a tradeoff between exploration, exploitation, and agents' incentives. We design a social planner which learns over time and coordinates the agents towards socially desirable outcomes.
  12. How Many Workers to Ask? Adaptive Exploration for Collecting High Quality Labels
    Ittai Abraham, Omar Alonso, Vasilis Kandylas, Rajesh Patel, Steven Shelford and Aleksandrs Slivkins
    SIGIR 2016: Intl. ACM SIGIR Conf. on Research and Development in Information Retrieval.
  13. Bayesian Incentive-Compatible Bandit Exploration (rev. 2018) (slides)
    Yishay Mansour, Aleksandrs Slivkins and Vasilis Syrgkanis
    EC 2015: ACM Symp. on Economics and Computation. We design bandit algorithms that recommend actions to self-interested agents (who then decide which actions to take). By means of carefully designed information disclosure, we incentivize the agents to balance exploration and exploitation so as to maximize social welfare.
  14. Contextual Dueling Bandits
    Miroslav Dudík, Katja Hofmann, Robert E. Schapire, Aleksandrs Slivkins and Masrour Zoghi
    COLT 2015: Conf. on Learning Theory. We extend "dueling bandits" (where feedback is limited to pairwise comparisons between arms) to incorporate contexts (as in "contextual bandits"). We propose a natural new solution concept, rooted in game theory, and present algorithms for approximately learning this concept.
  15. Incentivizing High Quality Crowdwork
    Chien-Ju Ho, Aleksandrs Slivkins, Siddharth Suri, and Jennifer Wortman Vaughan
    WWW 2015: 24th Intl. World Wide Web Conference. Nominee for Best Paper Award.
    A talk at CODE@MIT 2015: Conf. on Digital Experimentation @MIT.
    Short version: SIGecom Exchanges, Dec 2015. We study causal effects of performance-based payments (PBPs) on the quality of crowdwork, via randomized behavioral experiments on Amazon Mechanical Turk. We shed light on when, where, and why PBPs help improve quality.
  16. Adaptive Contract Design for Crowdsourcing Markets:
    Bandit Algorithms for Repeated Principal-Agent Problems
    (rev. Sep'15)
    Chien-Ju Ho, Aleksandrs Slivkins and Jennifer Wortman Vaughan.
    EC 2014: ACM Symp. on Economics and Computation. We consider a repeated version of the principal-agent model in which the principal can revise the contract over time, and the agent can strategically choose the (unobservable) effort level. We treat this as a multi-armed bandit problem, and design an algorithm that adaptively refines the partition of the action space without relying on Lipschitz assumptions.
  17. One Practical Algorithm for Both Stochastic and Adversarial Bandits
    Yevgeny Seldin and Aleksandrs Slivkins
    ICML 2014: Intl. Conf. on Machine Learning. We present a bandit algorithm that achieves near-optimal performance in both stochastic and adversarial regimes without prior knowledge about the environment. Our algorithm is both rigorous and practical; it is based on a new control lever that we reveal in the EXP3 algorithm.
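    For reference, a minimal Python sketch of standard EXP3, the algorithm we build on; our new control lever and the per-arm exploration tuning are omitted (this is background, not the paper's algorithm).

      import math, random

      def exp3(K, T, reward, gamma=0.1):
          """Minimal EXP3 for K arms over T rounds.
          reward(t, a) -> observed reward in [0, 1] for the pulled arm."""
          w = [1.0] * K
          for t in range(T):
              s = sum(w)
              p = [(1 - gamma) * w[a] / s + gamma / K for a in range(K)]
              a = random.choices(range(K), weights=p)[0]
              rhat = reward(t, a) / p[a]          # importance-weighted estimate
              w[a] *= math.exp(gamma * rhat / K)  # exponential weight update
          return w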
  18. Robust Multi-objective Learning with Mentor Feedback
    Alekh Agarwal, Ashwinkumar Badanidiyuru, Miroslav Dudik, Robert E. Schapire, Aleksandrs Slivkins.
    COLT 2014: Conf. on Learning Theory. We study decision-making with multiple objectives. During the training phase, we observe the actions of an outside agent (“mentor”). In the test phase, our goal is to maximally improve upon the mentor’s (unobserved) actions across all objectives. We present an algorithm with near-optimal regret compared with the best possible improvement.
  19. Resourceful Contextual Bandits
    Ashwinkumar Badanidiyuru, John Langford and Aleksandrs Slivkins
    COLT 2014: Conf. on Learning Theory. Contextual bandits with resource constraints: we consider very general settings for both contextual bandits (arbitrary policy sets) and bandits with resource constraints (bandits with knapsacks), and obtain a regret guarantee with near-optimal statistical properties.
  20. Bandits with Knapsacks (rev. 2017)
    Ashwinkumar Badanidiyuru, Robert Kleinberg and Aleksandrs Slivkins
    FOCS 2013: IEEE Symp. on Foundations of Computer Science. We define a broad class of explore-exploit problems with knapsack-style resource utilization constraints, subsuming dynamic pricing, dynamic procurement, pay-per-click ad allocation, and many other problems. Our algorithms achieve optimal regret w.r.t. the optimal dynamic policy.
  21. Multi-parameter Mechanisms with Implicit Payment Computation
    Moshe Babaioff, Robert Kleinberg and Aleksandrs Slivkins
    EC 2013: ACM Symp. on Electronic Commerce. We show that payment computation does not present any obstacle in designing truthful mechanisms, even for multi-parameter domains, and even when we can only call the allocation rule once. Then we study a prominent example of a multi-parameter setting in which the allocation rule can only be called once, arising in sponsored search auctions.
  22. Selection and Influence in Cultural Dynamics (rev. Oct'15)
    David Kempe, Jon Kleinberg, Sigal Oren and Aleksandrs Slivkins
    EC 2013: ACM Symp. on Electronic Commerce. Influence and selection -- the tendencies to, respectively, become similar to one's friends and to interact with similar people -- work in opposite directions: respectively, towards homogeneity and fragmentation. We analyze the societal outcomes when both forces are in effect. We consider a natural class of models from prior work on political opinion formation, cultural diversity, and language evolution.
  23. Adaptive Crowdsourcing Algorithms for the Bandit Survey Problem [poster]
    Ittai Abraham, Omar Alonso, Vasilis Kandylas and Aleksandrs Slivkins
    COLT 2013: Conf. on Learning Theory. We propose a simple model for adaptive quality control in crowdsourced multiple-choice tasks. We present several algorithms for this problem, and support them with analysis and simulations.
  24. Low-distortion Inference of Latent Similarities from a Multiplex Social Network (rev. Aug'14)
    Ittai Abraham, Shiri Chechik, David Kempe and Aleksandrs Slivkins
    SODA 2013: ACM-SIAM Symp. on Discrete Algorithms
    The observed social network is a noisy signal about the latent "social space": the ways in which individuals are (dis)similar to one another. We present near-linear time algorithms which, under some standard models, can infer the social space with provable guarantees.
  25. The best of both worlds: stochastic and adversarial bandits.
    Sébastien Bubeck and Aleksandrs Slivkins
    COLT 2012: Conf. on Learning Theory. We present a new bandit algorithm whose regret is optimal both for adversarial rewards and for stochastic rewards, achieving, resp., square-root regret and polylog regret. Adversarial rewards and stochastic rewards are the two main settings for (non-Bayesian) multi-armed bandits; prior work treats them separately, and does not attempt to jointly optimize for both.
  26. Dynamic pricing with limited supply (rev. Nov'13)
    Moshe Babaioff, Shaddin Dughmi, Robert Kleinberg and Aleksandrs Slivkins
    EC 2012: ACM Symp. on Electronic Commerce. We consider dynamic pricing with limited supply and unknown demand distribution. We extend multi-armed bandit techniques to the limited supply setting, and obtain optimal regret rates.
  27. Multi-armed bandits on implicit metric spaces
    NIPS 2011: Conf. on Neural Information Processing Systems. Suppose an MAB algorithm is given a tree-based classification of arms. This tree implicitly defines a "similarity distance" between arms, but the numeric distances are not revealed to the algorithm. Our algorithm (almost) matches the best known guarantees for the setting (Lipschitz MAB) in which the distances are revealed.
  28. Contextual bandits with similarity information (rev. May'14)
    COLT 2011: Conf. on Learning Theory.
    JMLR: J. of Machine Learning Research, 15(Jul):2533-2568, 2014. We study contextual bandits with a known metric over context-arm pairs which upper-bounds the local change in rewards. The main algorithmic idea is to adapt the partitions of the metric space to frequent context arrivals and high-payoff regions. Our framework also handles slowly changing payoffs and variable sets of arms.
  29. Ranked bandits in metric spaces: learning optimally diverse rankings over large document collections (rev. Sep'12)
    Aleksandrs Slivkins, Filip Radlinski and Sreenivas Gollapudi
    ICML 2010: Intl. Conf. on Machine Learning.
    Preliminary version: NIPS 2009 Ranking Workshop. We present a learning-to-rank framework for web search that incorporates similarity and correlation between documents and thus, unlike prior work, scales to large document collections.
  30. Truthful Mechanisms with Implicit Payment Computation (rev. Jul'14)
    Moshe Babaioff, Robert Kleinberg and Aleksandrs Slivkins
    EC 2010: ACM Symp. on Electronic Commerce (Best Paper Award). We show that payment computation essentially does not present any obstacle in designing truthful mechanisms for single-parameter domains, even when we can only call the allocation rule once. Applying this to multi-armed bandits (MAB), we design truthful MAB mechanisms for stochastic payoffs. More generally, we open up a problem of designing monotone MAB allocation rules.
  31. Sharp Dichotomies for Regret Minimization in Metric Spaces
    Robert Kleinberg and Aleksandrs Slivkins
    SODA 2010: ACM-SIAM Symp. on Discrete Algorithms
    The original full version is superseded by this version (revised & merged with the STOC'08 paper). We further study multi-armed bandits in metric spaces, focusing on the connections between online learning and metric topology. The main result is that the worst-case regret is either O(log t) or at least sqrt{t}, depending (essentially) on whether the metric space is countable.
  32. Adapting to the Shifting Intent of Search Queries
    Umar Syed, Aleksandrs Slivkins and Nina Mishra
    NIPS'09: Annual Conf. on Neural Information Processing Systems. Query intent may shift over time. A classifier can use the available signals to predict a shift in intent; then a bandit algorithm can be used to find the new relevant results. We present a meta-algorithm that combines such a classifier with a bandit algorithm in a feedback loop, with favorable regret guarantees.
  33. Characterizing Truthful Multi-Armed Bandit Mechanisms (revised June'13)
    Moshe Babaioff, Yogeshwer Sharma and Aleksandrs Slivkins
    EC 2009: ACM Symp. on Electronic Commerce. We consider a natural strategic version of multi-armed bandits (MAB), motivated by pay-per-click auctions. We show that requiring an MAB algorithm to be incentive-compatible has striking consequences both for structure and regret.
  34. Adapting to a Changing Environment: the Brownian Restless Bandits
    Aleksandrs Slivkins and Eli Upfal.
    COLT 2008: Conf. on Learning Theory. We study a version of the stochastic multi-armed bandit problem in which the expected reward of each arm evolves stochastically and gradually in time, following an independent Brownian motion or a similar process. Our benchmark is a hypothetical policy that chooses the best arm in each round.
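    Concretely, a sketch of the evolution model (boundary behavior simplified, notation ours): each arm's expected reward performs an independent random walk with volatility sigma,
      \[ \mu_{t+1}(a) \;=\; \mu_t(a) + \epsilon_t(a), \qquad \epsilon_t(a) \sim \mathcal{N}(0, \sigma^2), \]
    reflected so as to stay in [0, 1]; the benchmark collects \( \sum_t \max_a \mu_t(a) \).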
  35. Multi-armed Bandits in Metric Spaces
    Robert Kleinberg, Aleksandrs Slivkins and Eli Upfal.
    STOC 2008: ACM Symp. on Theory of Computing
    The original full version is superseded by this version (revised & merged with the SODA'10 paper). We introduce the 'Lipschitz bandits problem': a stochastic bandit problem, possibly with a very large set of arms, such that the expected payoffs obey a Lipschitz condition with respect to a given metric space. The goal is to minimize regret as a function of time, both in the worst case and for 'nice' problem instances.
  36. Towards Fast Decentralized Construction of Locality-Aware Overlay Networks
    PODC 2007: ACM Symp. on Principles of Distributed Computing [slides]. We provide fast (polylog-time) distributed constructions for various locality-aware (low-stretch) distributed data structures, such as distance labeling schemes, name-independent routing schemes, and multicast trees.
  37. Oscillations with TCP-like Flow Control in Networks of Queues
    Matthew Andrews and Aleksandrs Slivkins
    INFOCOM 2006: IEEE Conf. on Computer Communications. For a wide range of TCP-like fluid-based congestion control models, we construct a network of sessions and (almost) FIFO routers such that, starting from a certain initial state, the system eventually returns to the same state. In contrast to prior work, in our example the total sending rate of all sessions through any given router never exceeds its capacity.
  38. Metric Embeddings with Relaxed Guarantees
    T-H. Hubert Chan, Kedar Dhamdhere, Anupam Gupta, Jon Kleinberg and A. Slivkins
    FOCS 2005: IEEE Symp. on Foundations of Computer Science [slides]. Given any x, any metric admits a low-dim embedding into Lp, p>=1, with distortion D(x) = O(log 1/x) on all but an x-fraction of edges. Moreover, any decomposable metric (e.g. any doubling metric) admits a low-dim embedding such that D(x) = O(log 1/x)^{1/p} for all x.
  39. Meridian: A Lightweight Network Location Service without Virtual Coordinates
    Bernard Wong, Aleksandrs Slivkins and Emin G. Sirer.
    ACM SIGCOMM 2005. Meridian is a scalable overlay network for performing locality-aware node selection. The project features a live PlanetLab deployment.
  40. Distance Estimation and Object Location via Rings of Neighbors
    PODC 2005: ACM Symp. on Principles of Distributed Computing [slides]
    • Best Student Paper Award (eligibility: at least one student author)
    We approach several problems on distance estimation and object location with a unified technique called "rings of neighbors". Using this technique on metrics of low doubling dimension, we obtain significant improvements for low-stretch routing schemes, searchable small-world networks, distance labeling, and triangulation-based distance estimation.
  41. Distributed Approaches to Triangulation and Embedding
    SODA 2005: ACM-SIAM Symp. on Discrete Algorithms
    [recommended version: merged journal version of the FOCS'04 paper]. Following up on the FOCS'04 paper, we consider metric embeddings and triangulation-based distance estimation in a distributed framework with low load on all participating nodes.
  42. Triangulation and Embedding using Small Sets of Beacons
    Jon Kleinberg, Aleksandrs Slivkins and Tom Wexler.
    FOCS 2004: The IEEE Symp. on Foundations of Computer Science. We consider metric embeddings and triangulation-based distance estimation in a distributed framework where nodes measure distances only to a small set of beacons. Our results provide theoretical insight into the empirical success of several recent Internet-related projects.
  43. Network Failure Detection and Graph Connectivity
    Jon Kleinberg, Mark Sandler and Aleksandrs Slivkins.
    SODA 2004: The ACM-SIAM Symp. on Discrete Algorithms [slides]. We detect network partitions -- with strong provable guarantees -- using a small set of 'agents' placed randomly on nodes of the network. We parameterize our guarantees by edge- and node-connectivity of the underlying graph.
  44. Parameterized Tractability of Edge-Disjoint Paths on DAGs
    ESA 2003: The European Symp. on Algorithms [slides]. We resolve a long-standing open question about k-edge-disjoint paths: we show that this problem is W[1]-hard on DAGs, hence unlikely to admit running time f(k)*poly(n). However, such running time can be achieved if the input+demands graph is almost Eulerian.
  1. Making Contextual Decisions with Low Technical Debt (2016-2017)
    Alekh Agarwal, Sarah Bird, Markus Cozowicz, Luong Hoang, John Langford, Stephen Lee, Jiaji Li, Dan Melamed, Gal Oshri, Oswaldo Ribas, Siddhartha Sen, and Aleksandrs Slivkins. Applications and systems are constantly faced with decisions that require picking from a set of actions based on contextual information. Contextual bandit algorithms can be very effective in these settings, but applying them in practice is fraught with technical debt. We create the first general system for contextual bandit learning, called the Decision Service.
  2. Dynamic Ad Allocation: Bandits with Budgets (2013). This brief note concerns dynamic allocation of pay-per-click ads with advertisers' budgets. We define and analyze a natural extension of UCB1 to per-arm budgets.
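    For reference, a minimal Python sketch of plain UCB1, which the note extends (the per-arm budget accounting is omitted here; a budgeted variant would additionally skip arms whose budget is exhausted):

      import math

      def ucb1(K, T, reward):
          """Plain UCB1: pull the arm maximizing empirical mean + confidence radius.
          reward(t, a) -> observed reward in [0, 1] for the pulled arm."""
          counts = [0] * K
          means = [0.0] * K
          for t in range(T):
              if t < K:
                  a = t  # initialization: pull each arm once
              else:
                  a = max(range(K), key=lambda i:
                          means[i] + math.sqrt(2 * math.log(t) / counts[i]))
              r = reward(t, a)
              counts[a] += 1
              means[a] += (r - means[a]) / counts[a]  # running mean update
          return means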
  3. Approximate Matching for Peer-to-Peer Overlays with Cubit
    Bernard Wong, Aleksandrs Slivkins and Emin G. Sirer. Cubit is a system that provides fully decentralized approximate keyword search capabilities to a peer-to-peer network. You can use Cubit to find a movie, song, or artist even if you misspell the title or the name.
