Featured Research

Theoretical Economics

Lexicographic Choice Under Variable Capacity Constraints

In several matching markets, in order to achieve diversity, agents' priorities are allowed to vary across an institution's available seats, and the institution chooses agents in a lexicographic fashion based on a predetermined ordering of the seats; this procedure is called a (capacity-constrained) lexicographic choice rule. We provide a characterization of lexicographic choice rules and a characterization of deferred acceptance mechanisms that operate based on a lexicographic choice structure under variable capacity constraints. We discuss some implications for the Boston school choice system and show that our analysis can be helpful in applications for selecting among plausible choice rules.
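The seat-by-seat procedure described above is easy to sketch in code. The data shapes and names below are illustrative assumptions, not the paper's formal model:

```python
def lexicographic_choice(applicants, seat_priorities, capacity):
    """Choose up to `capacity` applicants, filling seats in a fixed order.

    seat_priorities: one list per seat, ranking agents highest-priority first.
    Each seat admits the highest-priority applicant not yet chosen.
    """
    chosen = []
    remaining = set(applicants)
    for priority in seat_priorities[:capacity]:   # capacity limits usable seats
        for agent in priority:                    # scan this seat's ranking
            if agent in remaining:
                chosen.append(agent)
                remaining.remove(agent)
                break
    return chosen

# Example: two seats with different priority orders over three applicants.
applicants = {"a", "b", "c"}
seats = [["b", "a", "c"],   # seat 1 prioritizes b
         ["c", "a", "b"]]   # seat 2 prioritizes c
print(lexicographic_choice(applicants, seats, capacity=2))  # ['b', 'c']
```

Varying `capacity` shows how the chosen set changes under variable capacity constraints: with `capacity=1` only seat 1 operates and only `b` is admitted.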

Theoretical Economics

Liability Design with Information Acquisition

How can firms be made to perform due diligence before launching potentially dangerous products? We study the design of liability rules when (i) limited liability prevents firms from internalizing the full damage they may cause, (ii) penalties are paid only if damage occurs, regardless of the product's inherent riskiness, and (iii) firms have private information about their products' riskiness before performing due diligence. We show that (i) any liability mechanism can be implemented by a tariff that depends only on the evidence acquired by the firm if damage occurs, not on any initial report by the firm about its private information, (ii) firms that assign a higher prior to product riskiness always perform more due diligence, but less than is socially optimal, and (iii) under a simple and intuitive condition, any type-specific launch thresholds can be implemented by a monotonic tariff.

Theoretical Economics

Liberalism, rationality, and Pareto optimality

Rational players in game theory are neoliberal in the sense that they can choose any available action so as to maximize their payoffs. It is well known that this can result in Pareto inferior outcomes (e.g. the Prisoner's Dilemma). Classical liberalism, in contrast, argues that people should be constrained by a no-harm principle (NHP) when they act. We show, for the first time to the best of our knowledge, that rational players constrained by the NHP will produce Pareto efficient outcomes in n-person non-cooperative games. We also show that both rationality and the NHP are required for this result.
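The Prisoner's Dilemma example cited above is easy to verify computationally. The payoff numbers below are the standard textbook ones, not taken from the paper:

```python
# (row action, col action) -> (row payoff, col payoff); C = cooperate, D = defect
payoffs = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
actions = ["C", "D"]

def is_nash(profile):
    """A profile is a Nash equilibrium if neither player gains by deviating."""
    r, c = profile
    row_best = all(payoffs[(r, c)][0] >= payoffs[(a, c)][0] for a in actions)
    col_best = all(payoffs[(r, c)][1] >= payoffs[(r, a)][1] for a in actions)
    return row_best and col_best

nash = [p for p in payoffs if is_nash(p)]
print(nash)  # [('D', 'D')]
# Mutual defection yields (1, 1), Pareto-dominated by mutual cooperation's (3, 3):
# unconstrained payoff maximization produces a Pareto-inferior outcome.
```

Under a no-harm constraint, deviating from (C, C) to D harms the other player, which is exactly the kind of move the NHP rules out.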

Theoretical Economics

Limits to green growth and the dynamics of innovation

Central to the official "green growth" discourse is the conjecture that absolute decoupling can be achieved with certain market instruments. This paper evaluates this claim focusing on the role of technology; changes in GDP composition are treated elsewhere. Some fundamental obstacles to absolute decoupling, referring specifically to thermodynamic costs, are identified through a stylized model based on empirical knowledge of innovation and learning. Normally, monetary costs decrease more slowly than production grows, and this is unlikely to change if monetary costs were aligned with thermodynamic costs, except, potentially, during the transition after the price reform. Furthermore, thermodynamic efficiency must eventually saturate for physical reasons. While the model, as usual, introduces technological innovation merely as a source of efficiency, innovation also creates challenges: attempts to sustain growth through ever-accelerating innovation therefore also collide with the limited reaction capacity of people and institutions. Information technology could disrupt innovation dynamics in the future, permitting quicker gains in eco-efficiency, but only up to saturation, and it would exacerbate the downsides of innovation. These observations suggest that long-term sustainability requires much deeper transformations than the green growth discourse presumes, exposing the need to rethink scales, tempos, and institutions, in line with ecological economics and the degrowth literature.
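The claim that monetary costs normally fall more slowly than production grows can be illustrated with a Wright-law learning curve, in which unit cost declines as a power of cumulative production. The parameter values below are our own illustrative assumptions, not the paper's calibration:

```python
import math

growth = 0.03          # assumed: 3% annual output growth
learning_rate = 0.10   # assumed: 10% unit-cost drop per doubling of cumulative output
b = -math.log2(1 - learning_rate)   # Wright-law exponent (~0.152)

output, cumulative = 1.0, 1.0
for year in range(50):
    output *= 1 + growth           # production keeps compounding
    cumulative += output           # experience accumulates with production
unit_cost = cumulative ** (-b)     # cost falls with cumulative experience
total_cost = output * unit_cost    # aggregate cost = volume x unit cost

# After 50 years: output ~4.4x, unit cost only ~0.49x, so total cost ~2.1x.
# Efficiency gains lag output growth: no absolute decoupling in this regime.
```

Capping `unit_cost` below by a positive thermodynamic floor, as the abstract's saturation argument suggests, only strengthens the conclusion.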

Theoretical Economics

Lindahl Equilibrium as a Collective Choice Rule

A collective choice problem is a finite set of social alternatives and a finite set of economic agents with vNM utility functions. We associate a public goods economy with each collective choice problem and establish the existence and efficiency of (equal income) Lindahl equilibrium allocations. We interpret collective choice problems as cooperative bargaining problems and define a set-valued solution concept, the equitable solution (ES). We provide axioms that characterize ES and show that ES contains the Nash bargaining solution. Our main result shows that the set of ES payoffs is the same as the set of Lindahl equilibrium payoffs. We consider two applications: in the first, we show that in a large class of matching problems without transfers the set of Lindahl equilibrium payoffs is the same as the set of (equal income) Walrasian equilibrium payoffs. In our second application, we show that in any discrete exchange economy without transfers every Walrasian equilibrium payoff is a Lindahl equilibrium payoff of the corresponding collective choice market. Moreover, for any cooperative bargaining problem, it is possible to define a set of commodities so that the resulting economy's utility possibility set is that bargaining problem and the resulting economy's set of Walrasian equilibrium payoffs is the same as the set of Lindahl equilibrium payoffs of the corresponding collective choice market.

Theoretical Economics

Local Dominance

We define a local notion of dominance that speaks to the true choice problems among actions in a game tree and does not rely on global planning. When we do not restrict players' ability to do contingent reasoning, a reduced strategy is weakly dominant if and only if it prescribes a locally dominant action at every decision node; therefore, any dynamic decomposition of a direct mechanism that preserves strategy-proofness is robust to the lack of global planning. Under a form of wishful thinking, we also show that strategy-proofness is robust to the lack of forward planning. Moreover, from our local perspective, we can identify rough forms of contingent reasoning that are particularly natural. We construct a dynamic game that implements the Top Trading Cycles allocation under a minimal form of contingent reasoning, related to independence of irrelevant alternatives.
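For reference, the Top Trading Cycles allocation that the abstract's dynamic game implements can be sketched in its textbook housing-market form (the standard static algorithm, not the paper's dynamic implementation; names are illustrative):

```python
def top_trading_cycles(prefs):
    """prefs[i] = agent i's houses ranked best-first; agent i initially owns house i.

    Repeatedly: each active agent points at the owner of their favorite remaining
    house; every cycle of pointers trades along the cycle and leaves the market.
    """
    assignment, active = {}, set(prefs)
    while active:
        # each agent points to their best remaining house (= its owner's index)
        points_to = {i: next(h for h in prefs[i] if h in active) for i in active}
        # walk pointers from any active agent until a node repeats: that's a cycle
        walk, node = [], next(iter(active))
        while node not in walk:
            walk.append(node)
            node = points_to[node]
        cycle = walk[walk.index(node):]
        for i in cycle:                      # everyone in the cycle trades
            assignment[i] = points_to[i]
            active.remove(i)
    return assignment

# Agents 0 and 1 want each other's houses and swap; agent 2 keeps house 2.
prefs = {0: [1, 0, 2], 1: [0, 1, 2], 2: [0, 1, 2]}
print(top_trading_cycles(prefs))  # {0: 1, 1: 0, 2: 2}
```

The static version requires each agent to know the whole cycle structure; the paper's point is that a suitable dynamic decomposition demands far less contingent reasoning.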

Theoretical Economics

Local Utility and Multivariate Risk Aversion

We revisit Machina's local utility as a tool to analyze attitudes to multivariate risks. We show that for non-expected utility maximizers choosing between multivariate prospects, aversion to multivariate mean-preserving increases in risk is equivalent to concavity of the local utility functions, thereby generalizing the result of Machina (1982). To analyze comparative risk attitudes within the multivariate extension of rank dependent expected utility of Galichon and Henry (2011), we extend Quiggin's monotone mean- and utility-preserving increases in risk and show that the useful characterization given in Landsberger and Meilijson (1994) still holds in the multivariate case.

Theoretical Economics

Lookahead and Hybrid Sample Allocation Procedures for Multiple Attribute Selection Decisions

Attributes provide critical information about the alternatives that a decision-maker is considering. When their magnitudes are uncertain, the decision-maker may be unsure about which alternative is truly the best, so measuring the attributes may help the decision-maker make a better decision. This paper considers settings in which each measurement yields one sample of one attribute for one alternative. When given a fixed number of samples to collect, the decision-maker must determine which samples to obtain, make the measurements, update prior beliefs about the attribute magnitudes, and then select an alternative. This paper presents the sample allocation problem for multiple attribute selection decisions and proposes two sequential, lookahead procedures for the case in which discrete distributions are used to model the uncertain attribute magnitudes. The two procedures are similar but reflect different quality measures (and loss functions), which motivate different decision rules: (1) select the alternative with the greatest expected utility and (2) select the alternative that is most likely to be the truly best alternative. We conducted a simulation study to evaluate the performance of the sequential procedures and hybrid procedures that first allocate some samples using a uniform allocation procedure and then use the sequential, lookahead procedure. The results indicate that the hybrid procedures are effective; allocating many (but not all) of the initial samples with the uniform allocation procedure not only reduces overall computational effort but also selects alternatives that have lower average opportunity cost and are more often truly best.
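As a rough illustration of the hybrid idea, here is our own simplification: the paper's discrete-distribution beliefs are replaced by raw sample statistics, and a standard-error-of-the-mean criterion stands in for the lookahead rule. `measure` is any callback returning one noisy sample of one attribute of one alternative:

```python
import itertools, statistics

def hybrid_allocate(alternatives, attributes, budget, measure, uniform_frac=0.5):
    """Hybrid sketch: uniform round-robin phase, then greedy sequential phase.

    Assumes the budget is large enough that every (alternative, attribute)
    pair is eventually sampled at least once.
    """
    data = {pair: [] for pair in itertools.product(alternatives, attributes)}
    pairs = list(data)
    # Phase 1: spread an initial fraction of the budget uniformly
    n_uniform = int(budget * uniform_frac)
    for k in range(n_uniform):
        pair = pairs[k % len(pairs)]
        data[pair].append(measure(*pair))
    # Phase 2: one sample at a time to the least-certain pair
    def uncertainty(pair):
        xs = data[pair]
        if len(xs) < 2:
            return float("inf")               # unsampled pairs come first
        return statistics.stdev(xs) / len(xs) ** 0.5
    for _ in range(budget - n_uniform):
        data[max(pairs, key=uncertainty)].append(measure(*pairs[0] if False else max(pairs, key=uncertainty)))

    # Decision rule (1): greatest estimated total utility (sum of attribute means)
    score = {a: sum(statistics.mean(data[(a, t)]) for t in attributes)
             for a in alternatives}
    return max(score, key=score.get), data

# Toy run: two alternatives, one attribute, deterministic "measurements".
best, data = hybrid_allocate(["A", "B"], ["x"], budget=6,
                             measure=lambda a, t: {"A": 1.0, "B": 2.0}[a])
print(best)  # B
```

The paper's second decision rule (pick the alternative most likely to be truly best) would replace the `score` step with a posterior probability computation over the discrete belief distributions.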

Theoretical Economics

M Equilibrium: A theory of beliefs and choices in games

We introduce a set-valued solution concept, M equilibrium, to capture empirical regularities from over half a century of game-theory experiments. We show that M equilibrium serves as a meta-theory for various models that were hitherto considered unrelated. M equilibrium is empirically robust and, despite being set-valued, falsifiable. We report results from a series of experiments comparing M equilibrium to leading behavioral-game-theory models and demonstrate its virtues in predicting observed choices and stated beliefs. Data from experimental games with a unique pure-strategy Nash equilibrium and multiple M equilibria exhibit coordination problems that could not be anticipated through the lens of existing models.

Theoretical Economics

Machine Learning for Strategic Inference

We study interactions between strategic players and markets whose behavior is guided by an algorithm. Algorithms use data from prior interactions and a limited set of decision rules to prescribe actions. While as-if rational play need not emerge if the algorithm is constrained, it is possible to guide behavior across a rich set of possible environments using limited details. Provided a condition known as weak learnability holds, Adaptive Boosting algorithms can be specified to induce behavior that is (approximately) as-if rational. Our analysis provides a statistical perspective on the study of endogenous model misspecification.
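For readers unfamiliar with the boosting machinery the abstract invokes, here is a generic AdaBoost sketch with decision stumps on one-dimensional data (labels in {-1, +1}). It illustrates weak learnability, i.e. weak learners that barely beat chance combining into an accurate classifier; it is not the paper's construction:

```python
import math

def best_stump(xs, ys, w):
    """Weak learner: threshold stump minimizing weighted classification error."""
    best = None
    for thr in xs:
        for sign in (1, -1):
            pred = [sign if x >= thr else -sign for x in xs]
            err = sum(wi for wi, p, y in zip(w, pred, ys) if p != y)
            if best is None or err < best[0]:
                best = (err, thr, sign)
    return best

def adaboost(xs, ys, rounds=3):
    n = len(xs)
    w = [1 / n] * n                          # start with uniform sample weights
    stumps = []
    for _ in range(rounds):
        err, thr, sign = best_stump(xs, ys, w)
        err = max(err, 1e-10)                # avoid division by zero
        alpha = 0.5 * math.log((1 - err) / err)
        stumps.append((alpha, thr, sign))
        # reweight: misclassified points gain weight, then renormalize
        w = [wi * math.exp(-alpha * y * (sign if x >= thr else -sign))
             for wi, x, y in zip(w, xs, ys)]
        total = sum(w)
        w = [wi / total for wi in w]
    def classify(x):                         # weighted vote of all stumps
        s = sum(a * (sg if x >= t else -sg) for a, t, sg in stumps)
        return 1 if s >= 0 else -1
    return classify

xs = [0, 1, 2, 3, 4, 5]
ys = [-1, -1, 1, 1, 1, -1]   # no single stump classifies this perfectly
clf = adaboost(xs, ys, rounds=3)
print([clf(x) for x in xs])  # [-1, -1, 1, 1, 1, -1] — matches ys
```

Weak learnability is the condition that `best_stump` always beats chance on the reweighted data; the abstract's point is that, when it holds, boosted play approximates as-if rational behavior.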

