
Publication


Featured research published by Andres Munoz Medina.


Algorithmic Learning Theory | 2012

New analysis and algorithm for learning with drifting distributions

Mehryar Mohri; Andres Munoz Medina

We present a new analysis of the problem of learning with drifting distributions in the batch setting using the notion of discrepancy. We prove learning bounds based on the Rademacher complexity of the hypothesis set and the discrepancy of distributions both for a drifting PAC scenario and a tracking scenario. Our bounds are always tighter and in some cases substantially improve upon previous ones based on the L1 distance. We also present a generalization of the standard on-line to batch conversion to the drifting scenario in terms of the discrepancy and arbitrary convex combinations of hypotheses. We introduce a new algorithm exploiting these learning guarantees, which we show can be formulated as a simple QP. Finally, we report the results of preliminary experiments demonstrating the benefits of this algorithm.
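The discrepancy at the heart of this analysis can be illustrated concretely. The sketch below computes an empirical discrepancy between two unlabeled samples for a small finite hypothesis set under squared loss; all names and the toy threshold class are illustrative, and the paper's treatment covers general hypothesis classes and losses.

```python
# Illustrative sketch: empirical discrepancy between two samples,
# measured as the largest gap in average pairwise disagreement of
# hypotheses. Names and the threshold class are assumptions for the
# example, not the paper's notation.

def sq_loss(h, hp, x):
    return (h(x) - hp(x)) ** 2

def empirical_discrepancy(hypotheses, sample_p, sample_q):
    """Max over hypothesis pairs of the gap in average disagreement
    between the two samples."""
    disc = 0.0
    for h in hypotheses:
        for hp in hypotheses:
            lp = sum(sq_loss(h, hp, x) for x in sample_p) / len(sample_p)
            lq = sum(sq_loss(h, hp, x) for x in sample_q) / len(sample_q)
            disc = max(disc, abs(lp - lq))
    return disc

# Example: threshold predictors on the real line.
H = [lambda x, t=t: float(x > t) for t in (0.0, 0.5, 1.0)]
source = [0.1, 0.2, 0.3, 0.4]   # sample from the older distribution
target = [0.6, 0.7, 0.8, 0.9]   # sample from the drifted distribution
print(empirical_discrepancy(H, source, target))   # → 1.0
```

Here the two thresholds 0.0 and 0.5 agree on every source point but disagree on every target point, so the discrepancy is maximal: the samples are far apart as far as this hypothesis set is concerned.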


Knowledge Discovery and Data Mining | 2015

Adaptation Algorithm and Theory Based on Generalized Discrepancy

Corinna Cortes; Mehryar Mohri; Andres Munoz Medina

We present a new algorithm for domain adaptation improving upon the discrepancy minimization algorithm (DM), which was previously shown to outperform a number of popular algorithms designed for this task. Unlike most previous approaches adopted for domain adaptation, our algorithm does not consist of a fixed reweighting of the losses over the training sample. Instead, it uses a reweighting that depends on the hypothesis considered and is based on the minimization of a new measure of generalized discrepancy. We give a detailed description of our algorithm and show that it can be formulated as a convex optimization problem. We also present a detailed theoretical analysis of its learning guarantees, which helps us select its parameters. Finally, we report the results of experiments demonstrating that it improves upon the DM algorithm in several tasks.
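The fixed-reweighting baseline (DM) that this work improves on can be sketched in a few lines: pick a weighting of the source sample that makes the reweighted source look like the target for a given hypothesis set. The brute-force grid search below is purely illustrative; the paper solves a convex program, and its generalized discrepancy goes further by letting the reweighting depend on the hypothesis under consideration.

```python
# Illustrative sketch of discrepancy minimization (DM): reweight two
# source points so their weighted pairwise-disagreement profile matches
# the target sample's. The toy threshold class and grid search are
# assumptions for the example only.

def wavg(h, hp, xs, ws):
    return sum(w * (h(x) - hp(x)) ** 2 for x, w in zip(xs, ws))

def discrepancy(H, src, ws, tgt):
    wt = [1.0 / len(tgt)] * len(tgt)
    return max(abs(wavg(h, hp, src, ws) - wavg(h, hp, tgt, wt))
               for h in H for hp in H)

H = [lambda x, t=t: float(x > t) for t in (0.25, 0.75)]
src = [0.5, 0.9]
tgt = [0.5, 0.6, 0.9]
# grid search over weightings (a, 1 - a) of the two source points
best = min(((a, 1 - a) for a in [i / 100 for i in range(101)]),
           key=lambda ws: discrepancy(H, src, ws, tgt))
print(best)   # weights the in-disagreement point close to 2/3
```

Two thirds of the target sample falls where the two thresholds disagree, so DM shifts roughly two thirds of the weight onto the one source point in that region.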


International World Wide Web Conference | 2018

Testing Incentive Compatibility in Display Ad Auctions

Sébastien Lahaie; Andres Munoz Medina; Balasubramanian Sivan; Sergei Vassilvitskii

Consider a buyer participating in a repeated auction, such as those prevalent in display advertising. How would she test whether the auction is incentive compatible? To bid effectively, she is interested in whether the auction is single-shot incentive compatible---a pure second-price auction, with fixed reserve price---and also dynamically incentive compatible---her bids are not used to set future reserve prices. In this work we develop tests based on simple bid perturbations that a buyer can use to answer these questions, with a focus on dynamic incentive compatibility. There are many potential A/B testing setups that one could use, but we find that many natural experimental designs are, in fact, flawed. For instance, we show that additive perturbations can lead to paradoxical results, where higher bids lead to lower optimal reserve prices. We precisely characterize this phenomenon and show that reserve prices are only guaranteed to be monotone for distributions satisfying the Monotone Hazard Rate (MHR) property. The experimenter must also decide how to split traffic to apply systematic perturbations. It is tempting to have this split be randomized, but we demonstrate empirically that unless the perturbations are aligned with the partitions used by the seller to compute reserve prices, the results are guaranteed to be inconclusive. We validate our results with experiments on real display auction data and show that a buyer can quantify both single-shot and dynamic incentive compatibility even under realistic conditions where only the cost of the impression is observed (as opposed to the exact reserve price). We analyze the cost of running such experiments, exposing trade-offs between test accuracy, cost, and underlying market dynamics.
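The monotonicity claim can be checked numerically. The sketch below grid-searches the monopoly reserve price r maximizing r * (1 - F(r)), the expected revenue from a single bidder with value distribution F; the exponential distribution used here satisfies the MHR property, and an additive upward perturbation of the bids does not lower the optimal reserve. The function names and grid-search approach are illustrative, not the paper's method.

```python
# Illustrative sketch: optimal reserve price under an MHR distribution.
# optimal_reserve grid-searches argmax_r r * (1 - F(r)).
import math

def optimal_reserve(cdf, lo, hi, steps=100000):
    best_r, best_rev = lo, 0.0
    for i in range(steps + 1):
        r = lo + (hi - lo) * i / steps
        rev = r * (1.0 - cdf(r))
        if rev > best_rev:
            best_r, best_rev = r, rev
    return best_r

lam = 2.0
exp_cdf = lambda x: 1.0 - math.exp(-lam * x)   # exponential: MHR holds
r0 = optimal_reserve(exp_cdf, 0.0, 5.0)

shift = 0.1                                    # additive bid perturbation
shifted_cdf = lambda x: exp_cdf(x - shift) if x >= shift else 0.0
r1 = optimal_reserve(shifted_cdf, 0.0, 5.0)

print(r0, r1)   # both near 1/lam = 0.5; the shift does not lower the reserve
```

For non-MHR (e.g. heavy-tailed) distributions this monotonicity can fail, which is exactly the paradox the abstract describes: higher bids can produce a lower optimal reserve.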


Conference on Information and Knowledge Management | 2018

Online Learning for Non-Stationary A/B Tests

Andres Munoz Medina; Sergei Vassilvitskii; Dong Yin

The rollout of a new version of a feature in modern applications is a manual, multi-stage process: the feature is released to ever larger groups of users while its performance is carefully monitored. This kind of A/B testing is ubiquitous but suboptimal: the monitoring requires heavy human intervention, is not guaranteed to capture consistent but short-lived fluctuations in performance, and is inefficient, as better versions take a long time to reach the full population. In this work we formulate this question as one of expert learning, and give a new algorithm, Follow-The-Best-Interval (FTBI), that works in dynamic, non-stationary environments. Our approach is practical, simple, and efficient, and has rigorous guarantees on its performance. Finally, we perform a thorough evaluation on synthetic and real-world datasets and show that our approach outperforms current state-of-the-art methods.
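The expert-learning framing can be fixed with a short sketch. Below is the classical exponentially weighted average forecaster, the stationary baseline on which this line of work builds; FTBI itself additionally maintains intervals to adapt when the best expert changes over time, which this baseline does not do. All names are illustrative.

```python
# Illustrative sketch: exponentially weighted average forecaster for
# prediction with expert advice (the stationary baseline, not FTBI).
import math

def exp_weights(expert_losses, eta=0.5):
    """expert_losses: list of rounds, each a list of per-expert losses
    in [0, 1]. Returns the cumulative expected loss of the forecaster
    that plays a weighted vote over experts."""
    n = len(expert_losses[0])
    weights = [1.0] * n
    total = 0.0
    for losses in expert_losses:
        z = sum(weights)
        # forecaster's expected loss this round
        total += sum(w * l for w, l in zip(weights, losses)) / z
        # multiplicative update: downweight experts that lost
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    return total

# Two "versions" as experts: A is better early, B better late — exactly
# the non-stationary regime where a fixed weighting lags behind.
rounds = [[0.0, 1.0]] * 10 + [[1.0, 0.0]] * 10
print(exp_weights(rounds))
```

On this sequence the forecaster locks onto expert A early and then pays for nearly the whole second phase before its weights recover, illustrating why non-stationary environments call for interval-based methods like FTBI.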


International Conference on Machine Learning | 2014

Learning Theory and Algorithms for Revenue Optimization in Second-Price Auctions with Reserve

Mehryar Mohri; Andres Munoz Medina


Neural Information Processing Systems | 2015

Revenue optimization against strategic buyers

Mehryar Mohri; Andres Munoz Medina


Uncertainty in Artificial Intelligence | 2015

Non-parametric revenue optimization for generalized second price auctions

Mehryar Mohri; Andres Munoz Medina


Neural Information Processing Systems | 2017

Revenue Optimization with Approximate Bid Predictions

Andres Munoz Medina; Sergei Vassilvitskii


Journal of Machine Learning Research | 2016

Learning algorithms for second-price auctions with reserve

Mehryar Mohri; Andres Munoz Medina


International Conference on Machine Learning | 2016

No-regret algorithms for heavy-tailed linear bandits

Andres Munoz Medina; Scott Yang

Collaboration


Dive into Andres Munoz Medina's collaborations.

Top Co-Authors

Mehryar Mohri

Courant Institute of Mathematical Sciences


Dong Yin

University of California
