
Publications


Featured research published by H. Brendan McMahan.


knowledge discovery and data mining | 2013

Ad click prediction: a view from the trenches

H. Brendan McMahan; Gary Holt; D. Sculley; Michael Young; Dietmar Ebner; Julian Paul Grady; Lan Nie; Todd Phillips; Eugene Davydov; Daniel Golovin; Sharat Chikkerur; Dan Liu; Martin Wattenberg; Arnar Mar Hrafnkelsson; Tom Boulos; Jeremy Kubica

Predicting ad click-through rates (CTR) is a massive-scale learning problem that is central to the multi-billion dollar online advertising industry. We present a selection of case studies and topics drawn from recent experiments in the setting of a deployed CTR prediction system. These include improvements in the context of traditional supervised learning based on an FTRL-Proximal online learning algorithm (which has excellent sparsity and convergence properties) and the use of per-coordinate learning rates. We also explore some of the challenges that arise in a real-world system that may appear at first to be outside the domain of traditional machine learning research. These include useful tricks for memory savings, methods for assessing and visualizing performance, practical methods for providing confidence estimates for predicted probabilities, calibration methods, and methods for automated management of features. Finally, we also detail several directions that did not turn out to be beneficial for us, despite promising results elsewhere in the literature. The goal of this paper is to highlight the close relationship between theoretical advances and practical engineering in this industrial setting, and to show the depth of challenges that appear when applying traditional machine learning methods in a complex dynamic system.
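The per-coordinate FTRL-Proximal update named in the abstract can be sketched roughly as follows for logistic regression. This is a minimal illustration of the published update rule, not the deployed system; the hyperparameter values are illustrative.

```python
import numpy as np

class FTRLProximal:
    """Sketch of per-coordinate FTRL-Proximal for logistic regression.

    Follows the update from McMahan et al. (2013). alpha, beta, l1, l2
    are the usual hyperparameters; the values here are illustrative.
    """

    def __init__(self, dim, alpha=0.1, beta=1.0, l1=1.0, l2=1.0):
        self.alpha, self.beta, self.l1, self.l2 = alpha, beta, l1, l2
        self.z = np.zeros(dim)   # accumulated (adjusted) gradients
        self.n = np.zeros(dim)   # accumulated squared gradients

    def weights(self):
        # Lazy closed-form weights: coordinates with |z_i| <= l1 stay exactly 0,
        # which is the source of the model sparsity the paper emphasizes.
        w = np.zeros_like(self.z)
        active = np.abs(self.z) > self.l1
        w[active] = -(self.z[active] - np.sign(self.z[active]) * self.l1) / (
            (self.beta + np.sqrt(self.n[active])) / self.alpha + self.l2)
        return w

    def update(self, x, y):
        w = self.weights()
        p = 1.0 / (1.0 + np.exp(-(x @ w)))    # predicted click probability
        g = (p - y) * x                        # logistic-loss gradient
        # Per-coordinate learning-rate term sigma_i = (sqrt(n+g^2)-sqrt(n))/alpha
        sigma = (np.sqrt(self.n + g * g) - np.sqrt(self.n)) / self.alpha
        self.z += g - sigma * w
        self.n += g * g
        return p
```

Features never seen (zero gradient mass) keep a weight of exactly zero, which is why the representation stays sparse at serving time.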


computer and communications security | 2016

Deep Learning with Differential Privacy

Martín Abadi; Andy Chu; Ian J. Goodfellow; H. Brendan McMahan; Ilya Mironov; Kunal Talwar; Li Zhang

Machine learning techniques based on neural networks are achieving remarkable results in a wide variety of domains. Often, the training of models requires large, representative datasets, which may be crowdsourced and contain sensitive information. The models should not expose private information in these datasets. Addressing this goal, we develop new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy. Our implementation and experiments demonstrate that we can train deep neural networks with non-convex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality.
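The core mechanism of differentially private training, clip each example's gradient to bound its influence and then add Gaussian noise to the averaged update, can be sketched as below. Hyperparameter values are illustrative, and the paper's accounting of cumulative privacy cost (the moments accountant) is omitted here.

```python
import numpy as np

def dp_sgd_step(w, per_example_grads, clip_norm=1.0, noise_mult=1.1,
                lr=0.1, seed=0):
    """One noisy SGD step in the style of DP-SGD (Abadi et al., 2016).

    Sketch only: clip each per-example gradient to clip_norm, average,
    add Gaussian noise scaled by noise_mult, and take a gradient step.
    """
    rng = np.random.default_rng(seed)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose norm exceeds clip_norm, so no single
        # example can move the model by more than a bounded amount.
        clipped.append(g / max(1.0, norm / clip_norm))
    mean = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm / len(per_example_grads),
                       size=np.asarray(w).shape)
    return w - lr * (mean + noise)
```

Because each example's contribution is bounded by `clip_norm`, the added Gaussian noise masks any individual example's presence in the batch.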


conference on learning theory | 2004

Online Geometric Optimization in the Bandit Setting Against an Adaptive Adversary

H. Brendan McMahan; Avrim Blum

We give an algorithm for the bandit version of a very general online optimization problem considered by Kalai and Vempala [1], for the case of an adaptive adversary. In this problem we are given a bounded set S ⊆ ℝ^n of feasible points. At each time step t, the online algorithm must select a point x_t ∈ S while simultaneously an adversary selects a cost vector c_t ∈ ℝ^n. The algorithm then incurs cost c_t · x_t. Kalai and Vempala show that even if S is exponentially large (or infinite), so long as we have an efficient algorithm for the offline problem (given c ∈ ℝ^n, find x ∈ S to minimize c · x) and so long as the cost vectors are bounded, one can efficiently solve the online problem of performing nearly as well as the best fixed x ∈ S in hindsight. The Kalai-Vempala algorithm assumes that the cost vectors c_t are given to the algorithm after each time step. In the "bandit" version of the problem, the algorithm only observes its cost, c_t · x_t. Awerbuch and Kleinberg [2] give an algorithm for the bandit version for the case of an oblivious adversary, and an algorithm that works against an adaptive adversary for the special case of the shortest path problem. They leave open the problem of handling an adaptive adversary in the general case. In this paper, we solve this open problem, giving a simple online algorithm for the bandit problem in the general case in the presence of an adaptive adversary. Ignoring a (polynomial) dependence on n, we achieve a regret bound of O(T^{3/4} √(ln T)).


international conference on machine learning | 2005

Bounded real-time dynamic programming: RTDP with monotone upper bounds and performance guarantees

H. Brendan McMahan; Maxim Likhachev; Geoffrey J. Gordon

MDPs are an attractive formalization for planning, but realistic problems often have intractably large state spaces. When we only need a partial policy to get from a fixed start state to a goal, restricting computation to states relevant to this task can make much larger problems tractable. We introduce a new algorithm, Bounded RTDP, which can produce partial policies with strong performance guarantees while only touching a fraction of the state space, even on problems where other algorithms would have to visit the full state space. To do so, Bounded RTDP maintains both upper and lower bounds on the optimal value function. The performance of Bounded RTDP is greatly aided by the introduction of a new technique to efficiently find suitable upper bounds; this technique can also be used to provide informed initialization to a wide range of other planning algorithms.
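The two central ideas, maintaining monotone upper and lower bounds on the value function and biasing exploration toward states with a large bound gap, can be illustrated on a toy chain MDP. This is a hedged sketch of the idea only, not the paper's exact pseudocode: the MDP, bounds, and constants below are invented for the example.

```python
import random

# Toy chain MDP: states 0..N-1, goal = N-1. The single action costs 1 and
# advances with probability 0.9 (otherwise the state is unchanged).
N = 6
GOAL = N - 1
P_ADVANCE = 0.9

def successors(s):
    # (probability, next_state) pairs for the single action
    return [(P_ADVANCE, min(s + 1, GOAL)), (1 - P_ADVANCE, s)]

def bounded_rtdp(trials=200, seed=0):
    rng = random.Random(seed)
    v_lo = [0.0] * N                                      # admissible lower bound
    v_hi = [0.0 if s == GOAL else 100.0 for s in range(N)]  # crude upper bound
    for _ in range(trials):
        s, path = 0, []
        while s != GOAL and len(path) < 50:
            path.append(s)
            # Bellman backups keep v_hi non-increasing and v_lo non-decreasing.
            v_hi[s] = min(v_hi[s], 1 + sum(p * v_hi[t] for p, t in successors(s)))
            v_lo[s] = max(v_lo[s], 1 + sum(p * v_lo[t] for p, t in successors(s)))
            # Sample the successor in proportion to probability times bound gap,
            # steering computation toward unresolved states.
            weights = [(p * (v_hi[t] - v_lo[t]) + 1e-9, t) for p, t in successors(s)]
            total = sum(w for w, _ in weights)
            r, acc = rng.random() * total, 0.0
            for w, t in weights:
                acc += w
                if r <= acc:
                    s = t
                    break
        for s in reversed(path):  # back up the trajectory toward the start state
            v_hi[s] = min(v_hi[s], 1 + sum(p * v_hi[t] for p, t in successors(s)))
            v_lo[s] = max(v_lo[s], 1 + sum(p * v_lo[t] for p, t in successors(s)))
    return v_lo, v_hi
```

When the gap v_hi(s) − v_lo(s) at the start state is small, the greedy policy with respect to either bound comes with a performance guarantee, even though many states may never have been touched.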


computer and communications security | 2017

Practical Secure Aggregation for Privacy-Preserving Machine Learning

Keith Allen Bonawitz; Vladimir Ivanov; Ben Kreuter; Antonio Marcedone; H. Brendan McMahan; Sarvar Patel; Daniel Ramage; Aaron Segal; Karn Seth

We design a novel, communication-efficient, failure-robust protocol for secure aggregation of high-dimensional data. Our protocol allows a server to compute the sum of large, user-held data vectors from mobile devices in a secure manner (i.e. without learning each user's individual contribution), and can be used, for example, in a federated learning setting, to aggregate user-provided model updates for a deep neural network. We prove the security of our protocol in the honest-but-curious and active adversary settings, and show that security is maintained even if an arbitrarily chosen subset of users drop out at any time. We evaluate the efficiency of our protocol and show, by complexity analysis and a concrete implementation, that its runtime and communication overhead remain low even on large data sets and client pools. For 16-bit input values, our protocol offers 1.73× communication expansion for 2^10 users and 2^20-dimensional vectors, and 1.98× expansion for 2^14 users and 2^24-dimensional vectors over sending data in the clear.


Discrete Applied Mathematics | 2004

Multi-source spanning trees: algorithms for minimizing source eccentricities

H. Brendan McMahan; Andrzej Proskurowski

We present two efficient algorithms for constructing a spanning tree with minimum eccentricity of a source, for a given graph with weighted edges and a set of source vertices. The first algorithm is the simpler to implement and the faster of the two. The second approach involves enumerating single-source shortest-path spanning trees for all points on a graph, a technique that may be useful in solving other problems.


ieee computer security foundations symposium | 2017

On the Protection of Private Information in Machine Learning Systems: Two Recent Approaches

Martín Abadi; Úlfar Erlingsson; Ian J. Goodfellow; H. Brendan McMahan; Ilya Mironov; Nicolas Papernot; Kunal Talwar; Li Zhang

The recent, remarkable growth of machine learning has led to intense interest in the privacy of the data on which machine learning relies, and to new techniques for preserving privacy. However, older ideas about privacy may well remain valid and useful. This note reviews two recent works on privacy in the light of the wisdom of some of the early literature, in particular the principles distilled by Saltzer and Schroeder in the 1970s.


international conference on machine learning | 2007

Efficiently computing minimax expected-size confidence regions

Brent Bryan; H. Brendan McMahan; Chad M. Schafer; Jeff G. Schneider

Given observed data and a collection of parameterized candidate models, a 1 − α confidence region in parameter space provides useful insight as to those models which are a good fit to the data, all while keeping the probability of incorrect exclusion below α. With complex models, optimally precise procedures (those with small expected size) are, in practice, difficult to derive; one solution is the Minimax Expected-Size (MES) confidence procedure. The key computational problem of MES is computing a minimax equilibrium of a certain zero-sum game. We show that this game is convex with bilinear payoffs, allowing us to apply any convex game solver, including linear programming. Exploiting the sparsity of the matrix, along with using fast linear programming software, allows us to compute approximate minimax expected-size confidence regions orders of magnitude faster than previously published methods. We test these approaches by estimating parameters for a cosmological model.


symposium on discrete algorithms | 2005

Online convex optimization in the bandit setting: gradient descent without a gradient

Abraham D. Flaxman; Adam Tauman Kalai; H. Brendan McMahan
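The trick named in the bandit-optimization paper's title, estimating a gradient from a single function evaluation per round by perturbing along a random unit direction, can be sketched as follows. Step sizes are illustrative, and the feasible set is taken to be the unit ball for simplicity.

```python
import numpy as np

def one_point_gradient(f, x, delta, rng):
    """Estimate the gradient of (a smoothed version of) f from one evaluation:
    the key device of Flaxman, Kalai & McMahan for bandit feedback."""
    u = rng.normal(size=x.size)
    u /= np.linalg.norm(u)                     # uniform direction on the sphere
    return (x.size / delta) * f(x + delta * u) * u

def bandit_gradient_descent(f, x0, steps=3000, delta=0.25, eta=0.005, seed=0):
    """Gradient descent using only bandit (single-value) feedback; a sketch
    with illustrative constants, projecting onto the unit ball each round."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - eta * one_point_gradient(f, x, delta, rng)
        norm = np.linalg.norm(x)
        if norm > 1.0:                         # project back onto the feasible set
            x /= norm
    return x
```

A single noisy evaluation per round suffices because the estimator is unbiased for the gradient of a smoothed surrogate of f, so the averaged descent direction is still informative.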


Journal of Machine Learning Research | 2008

Robust Submodular Observation Selection

Andreas Krause; H. Brendan McMahan; Carlos Guestrin; Anupam Gupta
