Publication


Featured research published by Brendan Lucier.


symposium on the theory of computing | 2010

Bayesian algorithmic mechanism design

Jason D. Hartline; Brendan Lucier

The principal problem in algorithmic mechanism design is in merging the incentive constraints imposed by selfish behavior with the algorithmic constraints imposed by computational intractability. This field is motivated by the observation that the preeminent approach for designing incentive compatible mechanisms, namely that of Vickrey, Clarke, and Groves, and the central approach for circumventing computational obstacles, that of approximation algorithms, are fundamentally incompatible: natural applications of the VCG approach to an approximation algorithm fail to yield an incentive compatible mechanism. We consider relaxing the desideratum of (ex post) incentive compatibility (IC) to Bayesian incentive compatibility (BIC), where truthtelling is a Bayes-Nash equilibrium (the standard notion of incentive compatibility in economics). For welfare maximization in single-parameter agent settings, we give a general black-box reduction that turns any approximation algorithm into a Bayesian incentive compatible mechanism with essentially the same approximation factor.
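As background for the single-parameter settings the reduction targets, the following is a minimal sketch of the classical baseline the paper relaxes: a monotone allocation rule paired with critical-value payments is (ex post) incentive compatible. The k-unit instance, function name, and values are illustrative, not taken from the paper.

```python
# Toy single-parameter setting: k identical units, at most one per agent.
# A monotone allocation rule (serve the k highest bids) paired with
# critical-value payments yields a truthful (ex post IC) mechanism.

def k_unit_auction(bids, k):
    """Allocate k units to the k highest bidders; each winner pays the
    critical value below which they would lose: the (k+1)-th highest bid."""
    order = sorted(range(len(bids)), key=lambda i: -bids[i])
    winners = order[:k]
    # Critical value is the highest losing bid (0 if everyone wins).
    critical = bids[order[k]] if len(bids) > k else 0.0
    payments = {i: critical for i in winners}
    return winners, payments

winners, payments = k_unit_auction([5.0, 9.0, 3.0, 7.0], k=2)
# Bidders 1 and 3 win (bids 9 and 7); each pays the highest losing bid, 5.0.
```

Raising a winning bid never changes the outcome, and lowering it below the critical value forfeits the unit, which is why truthtelling is a dominant strategy here.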


symposium on the theory of computing | 2013

Simultaneous auctions are (almost) efficient

Michal Feldman; Hu Fu; Nick Gravin; Brendan Lucier

Simultaneous item auctions are simple and practical procedures for allocating items to bidders with potentially complex preferences. In a simultaneous auction, every bidder submits independent bids on all items simultaneously. The allocation and prices are then resolved for each item separately, based solely on the bids submitted on that item. We study the efficiency of Bayes-Nash equilibrium (BNE) outcomes of simultaneous first- and second-price auctions when bidders have complement-free (a.k.a. subadditive) valuations. While it is known that the social welfare of every pure Nash equilibrium (NE) constitutes a constant fraction of the optimal social welfare, a pure NE rarely exists, and moreover, the full information assumption is often unrealistic. Therefore, quantifying the welfare loss in Bayes-Nash equilibria is of particular interest. Previous work established a logarithmic bound on the ratio between the social welfare of a BNE and the expected optimal social welfare in both first-price auctions (Hassidim et al., 2011) and second-price auctions (Bhawalkar and Roughgarden, 2011), leaving a large gap between a constant and a logarithmic ratio. We introduce a new proof technique and use it to resolve both of these gaps in a unified way. Specifically, we show that the expected social welfare of any BNE is at least 1/2 of the optimal social welfare in the case of first-price auctions, and at least 1/4 in the case of second-price auctions.
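The mechanism itself is simple enough to state in a few lines. Below is a sketch of the simultaneous second-price variant described above, with each item resolved independently from the bids submitted on it; the instance and function name are illustrative, not from the paper.

```python
# Simultaneous second-price item auctions: every bidder submits a bid on
# every item; each item is then resolved separately by a second-price
# rule using only the bids on that item.

def simultaneous_second_price(bids):
    """bids[i][j] = bid of bidder i on item j. Returns, per item, a
    (winner, price) pair with price equal to the second-highest bid."""
    n_items = len(bids[0])
    outcome = []
    for j in range(n_items):
        column = sorted(((bids[i][j], i) for i in range(len(bids))),
                        reverse=True)
        winner = column[0][1]
        price = column[1][0] if len(column) > 1 else 0.0
        outcome.append((winner, price))
    return outcome

# Two bidders, two items: bidder 0 wins item 0 at price 2,
# bidder 1 wins item 1 at price 1.
print(simultaneous_second_price([[5, 1], [2, 4]]))  # → [(0, 2), (1, 1)]
```

The welfare loss the paper quantifies arises because bidders with non-additive valuations must compress their preferences into these independent per-item bids.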


foundations of computer science | 2014

A Simple and Approximately Optimal Mechanism for an Additive Buyer

Moshe Babaioff; Nicole Immorlica; Brendan Lucier; S. Matthew Weinberg

We consider a monopolist seller with n heterogeneous items, facing a single buyer. The buyer has a value for each item drawn independently according to (non-identical) distributions, and his value for a set of items is additive. The seller aims to maximize his revenue. It is known that an optimal mechanism in this setting may be quite complex, requiring randomization [19] and menus of infinite size [15]. Hart and Nisan [17] have initiated a study of two very simple pricing schemes for this setting: item pricing, in which each item is priced at its monopoly reserve; and bundle pricing, in which the entire set of items is priced and sold as one bundle. Hart and Nisan [17] have shown that neither scheme can guarantee more than a vanishingly small fraction of the optimal revenue. In sharp contrast, we show that for any distributions, the better of item and bundle pricing is a constant-factor approximation to the optimal revenue. We further discuss extensions to multiple buyers and to valuations that are correlated across items.
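The two simple schemes compared in the abstract are easy to evaluate by simulation. The following Monte Carlo sketch estimates the revenue of per-item posted prices versus a single grand-bundle price for an additive buyer; the uniform distributions and the particular prices are illustrative choices of ours, not the paper's.

```python
import random

def item_pricing_revenue(samples, prices):
    # An additive buyer purchases exactly the items priced at or
    # below her value for them.
    return sum(sum(p for v, p in zip(vals, prices) if v >= p)
               for vals in samples) / len(samples)

def bundle_pricing_revenue(samples, bundle_price):
    # The buyer purchases the whole bundle iff her total value covers it.
    hits = sum(1 for vals in samples if sum(vals) >= bundle_price)
    return bundle_price * hits / len(samples)

random.seed(0)
# 10 items, each valued Uniform[0, 1] independently; 20,000 buyer draws.
samples = [[random.uniform(0, 1) for _ in range(10)] for _ in range(20000)]
print(item_pricing_revenue(samples, [0.5] * 10))  # ≈ 2.5 in expectation
print(bundle_pricing_revenue(samples, 4.0))       # sells whenever total ≥ 4
```

With many i.i.d. items, total value concentrates around its mean, so bundle pricing captures most of the welfare here; item pricing becomes the stronger of the two when a few high-variance items dominate.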


electronic commerce | 2011

GSP auctions with correlated types

Brendan Lucier; Renato Paes Leme

The Generalized Second Price (GSP) auction is the primary method by which sponsored search advertisements are sold. We study the performance of this auction in the Bayesian setting for players with correlated types. Correlation arises very naturally in the context of sponsored search auctions, especially as a result of uncertainty inherent in the behaviour of the underlying ad allocation algorithm. We demonstrate that the Bayesian Price of Anarchy of the GSP auction is bounded by 4, even when agents have arbitrarily correlated types. Our proof highlights a connection between the GSP mechanism and the concept of smoothness in games, which may be of independent interest. For the special case of uncorrelated (i.e. independent) agent types, we improve our bound to 2(1-1/e)^{-1} ≈ 3.16, significantly improving upon previously known bounds. Using our techniques, we obtain the same bound on the performance of GSP at coarse correlated equilibria, which captures (for example) a repeated-auction setting in which agents apply regret-minimizing bidding strategies. Moreover, our analysis is robust against the presence of irrational bidders and settings of asymmetric information, and our bounds degrade gracefully when agents apply strategies that form only an approximate equilibrium.


international world wide web conferences | 2012

On revenue in the generalized second price auction

Brendan Lucier; Renato Paes Leme; Éva Tardos

The Generalized Second Price (GSP) auction is the primary auction used for selling sponsored search advertisements. In this paper we consider the revenue of this auction at equilibrium. We prove that if agent values are drawn from identical regular distributions, then the GSP auction paired with an appropriate reserve price generates a constant fraction (1/6th) of the optimal revenue. In the full-information game, we show that any Nash equilibrium of the GSP auction obtains at least half of the revenue of the VCG mechanism excluding the payment of a single participant. This bound also holds with any reserve price, and is tight. Finally, we consider the tradeoff between maximizing revenue and social welfare. We introduce a natural convexity assumption on the click-through rates and show that it implies that the revenue-maximizing equilibrium of GSP in the full-information model will necessarily be envy-free. In particular, it is always possible to maximize revenue and social welfare simultaneously when click-through rates are convex. Without this convexity assumption, however, we demonstrate that revenue may be maximized at a non-envy-free equilibrium that generates a socially inefficient allocation.


workshop on internet and network economics | 2012

The power of local information in social networks

Christian Borgs; Michael Brautbar; Jennifer T. Chayes; Sanjeev Khanna; Brendan Lucier

We study the power of local information algorithms for optimization problems on social and technological networks. We focus on sequential algorithms where the network topology is initially unknown and is revealed only within a local neighborhood of vertices that have been irrevocably added to the output set. This framework models the behavior of an external agent that does not have direct access to the network data, such as a user interacting with an online social network. We study a range of problems under this model of algorithms with local information. When the underlying graph is a preferential attachment network, we show that one can find the root (i.e. initial node) in a polylogarithmic number of steps, using a local algorithm that repeatedly queries the visible node of maximum degree. This addresses an open question of Bollobas and Riordan. This result is motivated by its implications: we obtain polylogarithmic approximations to problems such as finding the smallest subgraph that connects a subset of nodes, finding the highest-degree nodes, and finding a subgraph that maximizes vertex coverage per subgraph size. Motivated by problems faced by recruiters in online networks, we also consider network coverage problems on arbitrary graphs. We demonstrate a sharp threshold on the level of visibility required: at a certain visibility level it is possible to design algorithms that nearly match the best approximation possible even with full access to the graph structure, but with any less information it is impossible to achieve a non-trivial approximation. We conclude that a network provider's decision of how much structure to make visible to its users can have a significant effect on a user's ability to interact strategically with the network.


Journal of Economic Theory | 2015

Bounding the inefficiency of outcomes in generalized second price auctions

Ioannis Caragiannis; Christos Kaklamanis; Panagiotis Kanellopoulos; Maria Kyropoulou; Brendan Lucier; Renato Paes Leme; Éva Tardos

The Generalized Second Price (GSP) auction is the primary auction used for monetizing the use of the Internet. It is well known that truthtelling is not a dominant strategy in this auction and that inefficient equilibria can arise. Edelman et al. (2007) [11] and Varian (2007) [36] show that an efficient equilibrium always exists in the full information setting. Their results, however, do not extend to the case with uncertainty, where efficient equilibria might not exist.


symposium on the theory of computing | 2011

Dueling algorithms

Nicole Immorlica; Adam Tauman Kalai; Brendan Lucier; Ankur Moitra; Andrew Postlewaite; Moshe Tennenholtz

We revisit classic algorithmic search and optimization problems from the perspective of competition. Rather than a single optimizer minimizing expected cost, we consider a zero-sum game in which a search problem is presented to two players, whose only goal is to outperform the opponent. Such games are typically exponentially large zero-sum games, but they often have a rich structure. We provide general techniques by which such structure can be leveraged to find minmax-optimal and approximate minmax-optimal strategies. We give examples of ranking, hiring, compression, and binary search duels, among others. We give bounds on how often one can beat the classic optimization algorithms in such duels.


symposium on the theory of computing | 2016

The price of anarchy in large games

Michal Feldman; Nicole Immorlica; Brendan Lucier; Tim Roughgarden; Vasilis Syrgkanis


knowledge discovery and data mining | 2015

Influence at Scale: Distributed Computation of Complex Contagion in Networks

Brendan Lucier; Joel Oren; Yaron Singer
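Several of the entries above analyze the Generalized Second Price auction. As a reference point, here is a minimal sketch of its allocation and pricing rule under the standard separable click-through-rate model; the instance, function name, and omission of quality scores and tie-breaking are ours, not drawn from any of the papers.

```python
# GSP sketch: rank bidders by bid; the bidder in slot s pays, per click,
# the bid of the bidder ranked one position below. Slot click-through
# rates (CTRs) convert per-click prices into expected payments.

def gsp(bids, ctrs):
    """Assign the len(ctrs) slots to the highest bidders. Returns the
    slot assignment, per-click prices, and expected payments."""
    order = sorted(range(len(bids)), key=lambda i: -bids[i])
    slots = order[:len(ctrs)]
    per_click = [bids[order[s + 1]] if s + 1 < len(order) else 0.0
                 for s in range(len(slots))]
    expected_payment = [ctrs[s] * per_click[s] for s in range(len(slots))]
    return slots, per_click, expected_payment

slots, per_click, expected = gsp(bids=[4.0, 7.0, 2.0], ctrs=[0.3, 0.1])
# Bidder 1 (bid 7) takes the top slot and pays 4 per click;
# bidder 0 (bid 4) takes slot 2 and pays 2 per click.
```

Note that, unlike a Vickrey payment, a bidder's per-click price here depends on her own rank, which is exactly why truthtelling is not a dominant strategy in GSP.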

Collaboration


Dive into Brendan Lucier's collaborations.

Top Co-Authors

Nick Gravin (Nanyang Technological University)

Joel Oren (University of Toronto)

Hu Fu (Cornell University)