Publication


Featured research published by Moshe Hoffman.


Proceedings of the National Academy of Sciences of the United States of America | 2013

Powering up with indirect reciprocity in a large-scale field experiment

Erez Yoeli; Moshe Hoffman; David G. Rand; Martin A. Nowak

A defining aspect of human cooperation is the use of sophisticated indirect reciprocity. We observe others, talk about others, and act accordingly. We help those who help others, and we cooperate expecting that others will cooperate in return. Indirect reciprocity is based on reputation, which spreads by communication. A crucial aspect of indirect reciprocity is observability: reputation effects can support cooperation as long as people’s actions can be observed by others. In evolutionary models of indirect reciprocity, natural selection favors cooperation when observability is sufficiently high. Complementing this theoretical work are experiments where observability promotes cooperation among small groups playing games in the laboratory. Until now, however, there has been little evidence of observability’s power to promote large-scale cooperation in real-world settings. Here we provide such evidence using a field study involving 2413 subjects. We collaborated with a utility company to study participation in a program designed to prevent blackouts. We show that observability triples participation in this public goods game. The effect is over four times larger than offering a $25 monetary incentive, the company’s previous policy. Furthermore, as predicted by indirect reciprocity, we provide evidence that reputational concerns are driving our observability effect. In sum, we show how indirect reciprocity can be harnessed to increase cooperation in a relevant, real-world public goods game.


Nature | 2016

Third-party punishment as a costly signal of trustworthiness

Jillian J. Jordan; Moshe Hoffman; Paul Bloom; David G. Rand

Third-party punishment (TPP), in which unaffected observers punish selfishness, promotes cooperation by deterring defection. But why should individuals choose to bear the costs of punishing? We present a game theoretic model of TPP as a costly signal of trustworthiness. Our model is based on individual differences in the costs and/or benefits of being trustworthy. We argue that individuals for whom trustworthiness is payoff-maximizing will find TPP to be less net costly (for example, because mechanisms that incentivize some individuals to be trustworthy also create benefits for deterring selfishness via TPP). We show that because of this relationship, it can be advantageous for individuals to punish selfishness in order to signal that they are not selfish themselves. We then empirically validate our model using economic game experiments. We show that TPP is indeed a signal of trustworthiness: third-party punishers are trusted more, and actually behave in a more trustworthy way, than non-punishers. Furthermore, as predicted by our model, introducing a more informative signal—the opportunity to help directly—attenuates these signalling effects. When potential punishers have the chance to help, they are less likely to punish, and punishment is perceived as, and actually is, a weaker signal of trustworthiness. Costly helping, in contrast, is a strong and highly used signal even when TPP is also possible. Together, our model and experiments provide a formal reputational account of TPP, and demonstrate how the costs of punishing may be recouped by the long-run benefits of signalling one’s trustworthiness.
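The signalling logic can be illustrated with a minimal numerical sketch. Everything below is a hypothetical parameterization invented for illustration (the type labels, payoff values, and functions are not from the paper): two sender types differ in the net cost of punishing, so only the trustworthy type finds punishment worth the price, and observers can therefore safely trust punishers.

```python
# Minimal sketch of third-party punishment (TPP) as a costly signal.
# Hypothetical parameters for illustration; the published model differs in detail.

PUNISH_COST = {"trustworthy": 1.0, "exploitative": 3.0}  # net cost of punishing, by type
TRUST_BONUS = 2.0  # future payoff from being trusted by observers

def punishes(sender_type: str) -> bool:
    """A sender punishes iff the trust gained outweighs the type-specific cost."""
    return TRUST_BONUS > PUNISH_COST[sender_type]

def observer_trusts(saw_punishment: bool) -> bool:
    """In the separating outcome, observers trust exactly the punishers."""
    return saw_punishment

for t in ("trustworthy", "exploitative"):
    p = punishes(t)
    print(f"{t}: punishes={p}, trusted={observer_trusts(p)}")
# Only the trustworthy type finds punishing profitable, so punishment
# credibly signals trustworthiness.
```

The single asymmetry doing the work here mirrors the abstract's key assumption: being trustworthy makes punishing less net costly, and that alone lets punishment separate the types.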


Proceedings of the National Academy of Sciences of the United States of America | 2015

Cooperate without looking: Why we care what people think and not just what they do

Moshe Hoffman; Erez Yoeli; Martin A. Nowak

Significance: Why do we trust people more when they do good without considering in detail the cost to themselves? People who avoid “looking” at the costs of good acts can be trusted to cooperate in important situations, whereas those who look cannot. We find that evolutionary dynamics can lead to cooperation without looking at costs. Our results illuminate why we attend closely to people’s motivations for doing good, as prescribed by deontological ethicists such as Kant, and, also, why we admire principled people, adhere to taboos, and fall in love.

Evolutionary game theory typically focuses on actions but ignores motives. Here, we introduce a model that takes into account the motive behind the action. A crucial question is why we trust people more when they cooperate without calculating the costs. We propose a game theory model to explain this phenomenon. One player has the option to “look” at the costs of cooperation, and the other player chooses whether to continue the interaction. If it is occasionally very costly for player 1 to cooperate, but defection is harmful for player 2, then cooperation without looking is a subgame perfect equilibrium. This behavior also emerges in population-based processes of learning or evolution. Our theory illuminates a number of key phenomena of human interactions: authentic altruism, why people cooperate intuitively, one-shot cooperation, why friends do not keep track of favors, why we admire principled people, Kant’s second formulation of the Categorical Imperative, taboos, and love.


Proceedings of the National Academy of Sciences of the United States of America | 2016

Uncalculating Cooperation Is Used to Signal Trustworthiness

Jillian J. Jordan; Moshe Hoffman; Martin A. Nowak; David G. Rand

Significance: Human prosociality presents an evolutionary puzzle, and reciprocity has emerged as a dominant explanation: cooperating today can bring benefits tomorrow. Reciprocity theories clearly predict that people should only cooperate when the benefits outweigh the costs, and thus that the decision to cooperate should always depend on a cost–benefit analysis. Yet human cooperation can be very uncalculating: good friends grant favors without asking questions, romantic love “blinds” us to the costs of devotion, and ethical principles make universal moral prescriptions. Here, we provide the first evidence, to our knowledge, that reputation effects drive uncalculating cooperation. We demonstrate, using economic game experiments, that people engage in uncalculating cooperation to signal that they can be relied upon to cooperate in the future.

Humans frequently cooperate without carefully weighing the costs and benefits. As a result, people may wind up cooperating when it is not worthwhile to do so. Why risk making costly mistakes? Here, we present experimental evidence that reputation concerns provide an answer: people cooperate in an uncalculating way to signal their trustworthiness to observers. We present two economic game experiments in which uncalculating versus calculating decision-making is operationalized by either a subject’s choice of whether to reveal the precise costs of cooperating (Exp. 1) or the time a subject spends considering these costs (Exp. 2). In both experiments, we find that participants are more likely to engage in uncalculating cooperation when their decision-making process is observable to others. Furthermore, we confirm that people who engage in uncalculating cooperation are perceived as, and actually are, more trustworthy than people who cooperate in a calculating way. Taken together, these data provide the first empirical evidence, to our knowledge, that uncalculating cooperation is used to signal trustworthiness, and is not merely an efficient decision-making strategy that reduces cognitive costs. Our results thus help to explain a range of puzzling behaviors, such as extreme altruism, the use of ethical principles, and romantic love.


Policy Insights from the Behavioral and Brain Sciences | 2014

Harnessing Reciprocity to Promote Cooperation and the Provisioning of Public Goods

David G. Rand; Erez Yoeli; Moshe Hoffman

How can we maximize the common good? This is a central organizing question of public policy design, across political parties and ideologies. The answer typically involves the provisioning of public goods such as fresh air, national defense, and knowledge. Public goods are costly to produce but benefit everyone, thus creating a social dilemma: Individual and collective interests are in tension. Although individuals may want a public good to be produced, they typically would prefer not to be the ones who have to pay for it. Understanding how to motivate individuals to pay these costs is therefore of great importance for policy makers. Research provides advice on how to promote this type of “cooperative” behavior. A synthesis of a large body of research demonstrates the power of “reciprocity” for inducing cooperation: When others know that you have helped them, or acted to benefit the greater good, they are often more likely to reciprocate and help you in turn. Several conclusions stem from this line of thinking: People will be more likely to do their part when their actions are observable by others; people will pay more attention to how effective those actions are when efficacy is also observable; people will try to avoid situations where they could help, but often will help if asked directly; people are more likely to cooperate if they think others are also cooperating; and people can develop habits of cooperation that shape their default inclinations.


Games | 2015

Cooperate without Looking in a Non-Repeated Game

Christian Hilbe; Moshe Hoffman; Martin A. Nowak

We propose a simple model for why we have more trust in people who cooperate without calculating the associated costs. Intuitively, by not looking at the payoffs, people indicate that they will not be swayed by high temptations to defect, which makes them more attractive as interaction partners. We capture this intuition using a simple four-stage game. In the first stage, nature draws the costs and benefits of cooperation according to a commonly-known distribution. In the second stage, Player 1 chooses whether or not to look at the realized payoffs. In the third stage, Player 2 decides whether to exit or let Player 1 choose whether or not to cooperate in the fourth stage. Using backward induction, we provide a complete characterization for when we expect Player 1 to cooperate without looking. Moreover, we show with numerical simulations how cooperating without looking can emerge through simple evolutionary processes.
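Since the abstract spells out the four stages and the backward-induction argument, a small sketch may help. The payoff numbers below (P_LOW, C_LOW, C_HIGH, V, T, B, H) are illustrative assumptions invented for this sketch, not the paper's parameters:

```python
# Backward-induction sketch of the four-stage game described above.
# All payoff numbers are illustrative assumptions, not taken from the paper.

P_LOW, C_LOW, C_HIGH = 0.9, 1.0, 7.0   # nature: cost of cooperating is usually low
V, T = 5.0, 3.0                        # player 1: value of cooperating, temptation to defect
B, H = 2.0, 20.0                       # player 2: benefit if P1 cooperates, harm if P1 defects

def p1_cooperates(looked: bool, cost: float) -> bool:
    """Stage 4: player 1 cooperates iff cooperation beats defection."""
    if looked:
        return V - cost > T            # knows the realized cost
    expected_cost = P_LOW * C_LOW + (1 - P_LOW) * C_HIGH
    return V - expected_cost > T       # a non-looker decides on the expected cost

def p2_continues(p1_looked: bool) -> bool:
    """Stage 3: player 2 continues iff her expected payoff beats exiting (payoff 0)."""
    states = [(P_LOW, C_LOW), (1 - P_LOW, C_HIGH)]
    expected = sum(p * (B if p1_cooperates(p1_looked, c) else -H) for p, c in states)
    return expected > 0

for looked in (True, False):
    print(f"P1 looks={looked}: P2 continues={p2_continues(looked)}")
# With these numbers a looker defects in the rare high-cost state, so P2 exits
# on lookers (0.9*2 - 0.1*20 < 0) but continues on non-lookers.
```

Player 1's stage-2 comparison then follows: looking leads to exit (payoff 0), while not looking leads to continuation and cooperation (positive expected payoff), so under these assumed numbers cooperating without looking survives backward induction.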


Scientific Reports | 2015

An experimental investigation of evolutionary dynamics in the Rock-Paper-Scissors game

Moshe Hoffman; Sigrid Suetens; Uri Gneezy; Martin A. Nowak

Game theory describes social behaviors in humans and other biological organisms. By far, the most powerful tool available to game theorists is the concept of a Nash Equilibrium (NE), which is motivated by perfect rationality. NE specifies a strategy for everyone, such that no one would benefit by deviating unilaterally from his/her strategy. Another powerful tool available to game theorists is evolutionary dynamics (ED). Motivated by evolutionary and learning processes, ED specify changes in strategies over time in a population, such that more successful strategies typically become more frequent. A simple game that illustrates interesting ED is the generalized Rock-Paper-Scissors (RPS) game. The RPS game extends the children's game to situations where winning or losing can matter more or less relative to tying. Here we investigate experimentally three RPS games, where the NE is always to randomize with equal probability, but the evolutionary stability of this strategy changes. Consistent with the prediction of ED, we find that aggregate behavior is far away from NE when it is evolutionarily unstable. Our findings add to the growing literature that demonstrates the predictive validity of ED in large-scale incentivized laboratory experiments with human subjects.
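A minimal simulation can illustrate the distinction the abstract draws between the Nash equilibrium prediction and evolutionary stability. The sketch below iterates the standard replicator equation for a generalized RPS payoff matrix; the win/loss payoff values are invented for illustration and are not the experimental parameters:

```python
# Replicator-dynamics sketch for a generalized Rock-Paper-Scissors game.
# Payoff values are illustrative, not the parameters used in the experiments.
import numpy as np

def rps_matrix(win: float, loss: float) -> np.ndarray:
    """Row player's payoff: 0 on a tie, +win on a win, -loss on a loss."""
    return np.array([[0.0, -loss, win],
                     [win, 0.0, -loss],
                     [-loss, win, 0.0]])

def replicate(x: np.ndarray, A: np.ndarray, dt: float = 0.01, steps: int = 20_000) -> np.ndarray:
    """Euler-integrate the replicator equation dx_i = x_i * (f_i - avg_f)."""
    for _ in range(steps):
        fitness = A @ x
        x = x + dt * x * (fitness - x @ fitness)
        x = np.clip(x, 1e-12, None)
        x /= x.sum()                   # stay on the simplex despite Euler error
    return x

x0 = np.array([0.5, 0.3, 0.2])         # start away from the equal-thirds NE
for win, loss in [(2.0, 1.0), (1.0, 2.0)]:
    x = replicate(x0.copy(), rps_matrix(win, loss))
    print(f"win={win}, loss={loss} -> long-run shares {np.round(x, 3)}")
# When winning pays more than losing costs, the population spirals into the
# equal-thirds NE; when it pays less, trajectories spiral outward and
# aggregate play stays far from the NE.
```

For this payoff matrix the equal-thirds equilibrium is asymptotically stable under the replicator dynamics exactly when the win payoff exceeds the loss cost, which is the kind of change in evolutionary stability (with the NE held fixed) that the experiments exploit.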


Archive | 2016

Game Theory and Morality

Moshe Hoffman; Erez Yoeli; Carlos David Navarrete

Why do we think it's wrong to treat people merely as a means to an end? Why do we consider lies of omission less immoral than lies of commission? Why do we consider it good to give, regardless of whether the gift is effective? We use four simple game theoretic models—the Coordination Game, the Hawk–Dove game, Repeated Prisoner's Dilemma, and the Envelope Game—to shed light on these and other puzzling aspects of human morality. We also justify the use of game theory for the study of morality and explore implications for group selection and moral realism.


Nature Human Behaviour | 2018

The signal-burying game can explain why we obscure positive traits and good deeds

Moshe Hoffman; Christian Hilbe; Martin A. Nowak

People sometimes make their admirable deeds and accomplishments hard to spot, such as by giving anonymously or avoiding bragging. Such ‘buried’ signals are hard to reconcile with standard models of signalling or indirect reciprocity, which motivate costly pro-social behaviour by reputational gains. To explain these phenomena, we design a simple game theory model, which we call the signal-burying game. This game has the feature that senders can bury their signal by deliberately reducing the probability of the signal being observed. If the signal is observed, however, it is identified as having been buried. We show under which conditions buried signals can be maintained, using static equilibrium concepts and calculations of the evolutionary dynamics. We apply our analysis to shed light on a number of otherwise puzzling social phenomena, including modesty, anonymous donations, subtlety in art and fashion, and overeagerness.
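The central mechanic, burying a signal by making it less likely to be seen, can be illustrated with a small expected-payoff sketch. The two audiences, the observation probability, and the payoff values below are invented for illustration and are not the paper's model:

```python
# Expected-payoff sketch of the signal-burying mechanic: a buried signal is
# observed only with probability q, but when observed it is recognized as
# deliberately buried. All numbers are illustrative assumptions.
import random

Q_BURIED = 0.3          # chance a buried signal is seen at all
REWARD_SELECTIVE = 5.0  # value of impressing the discerning audience (sees the burying)
REWARD_BROAD = 1.0      # value of impressing the broad audience (clear signal)

def expected_payoff(strategy: str, trials: int = 100_000) -> float:
    total = 0.0
    for _ in range(trials):
        if strategy == "clear":
            total += REWARD_BROAD       # everyone sees it, nobody is impressed much
        elif strategy == "buried" and random.random() < Q_BURIED:
            total += REWARD_SELECTIVE   # the rare observer infers a deliberate burial
    return total / trials

for s in ("clear", "buried", "no signal"):
    print(f"{s}: expected payoff ≈ {expected_payoff(s):.2f}")
# Here burying pays (0.3 * 5 > 1): giving up most of the audience is worth it
# when an observed burial carries extra information to the observers who matter.
```

This only captures the flavour of the equilibrium: the paper characterizes, in the full game, exactly when such buried signals can be maintained.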


Evolution and Human Behavior | 2008

Testosterone and financial risk preferences

Coren L. Apicella; Anna Dreber; Benjamin C. Campbell; Peter B. Gray; Moshe Hoffman; Anthony C. Little


Collaboration


Dive into Moshe Hoffman's collaborations.

Top Co-Authors

Uri Gneezy

University of California


Benjamin C. Campbell

University of Wisconsin–Milwaukee


Coren L. Apicella

University of Pennsylvania
