Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where David G. Rand is active.

Publication


Featured research published by David G. Rand.


Nature | 2012

Spontaneous giving and calculated greed

David G. Rand; Joshua D. Greene; Martin A. Nowak

Cooperation is central to human social behaviour. However, choosing to cooperate requires individuals to incur a personal cost to benefit others. Here we explore the cognitive basis of cooperative decision-making in humans using a dual-process framework. We ask whether people are predisposed towards selfishness, behaving cooperatively only through active self-control; or whether they are intuitively cooperative, with reflection and prospective reasoning favouring ‘rational’ self-interest. To investigate this issue, we perform ten studies using economic games. We find that across a range of experimental designs, subjects who reach their decisions more quickly are more cooperative. Furthermore, forcing subjects to decide quickly increases contributions, whereas instructing them to reflect and forcing them to decide slowly decreases contributions. Finally, an induction that primes subjects to trust their intuitions increases contributions compared with an induction that promotes greater reflection. To explain these results, we propose that cooperation is intuitive because cooperative heuristics are developed in daily life where cooperation is typically advantageous. We then validate predictions generated by this proposed mechanism. Our results provide convergent evidence that intuition supports cooperation in social dilemmas, and that reflection can undermine these cooperative impulses.


Nature | 2008

Winners don’t punish

Anna Dreber; David G. Rand; Drew Fudenberg; Martin A. Nowak

A key aspect of human behaviour is cooperation. We tend to help others even if costs are involved. We are more likely to help when the costs are small and the benefits for the other person significant. Cooperation leads to a tension between what is best for the individual and what is best for the group. A group does better if everyone cooperates, but each individual is tempted to defect. Recently there has been much interest in exploring the effect of costly punishment on human cooperation. Costly punishment means paying a cost for another individual to incur a cost. It has been suggested that costly punishment promotes cooperation even in non-repeated games and without any possibility of reputation effects. But most of our interactions are repeated and reputation is always at stake. Thus, if costly punishment is important in promoting cooperation, it must do so in a repeated setting. We have performed experiments in which, in each round of a repeated game, people choose between cooperation, defection and costly punishment. In control experiments, people could only cooperate or defect. Here we show that the option of costly punishment increases the amount of cooperation but not the average payoff of the group. Furthermore, there is a strong negative correlation between total payoff and use of costly punishment. Those people who gain the highest total payoff tend not to use costly punishment: winners don’t punish. This suggests that costly punishment behaviour is maladaptive in cooperation games and might have evolved for other reasons.
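The game structure described above is simple enough to sketch in a few lines. The snippet below illustrates how a single cooperate/defect/punish move changes two players' running payoffs; the benefit, cost, and punishment parameters (b, c, alpha, beta) are illustrative placeholders, not the stakes used in the experiment.

```python
# Illustrative per-round payoff consequences in a pairwise game with three
# moves: cooperate (pay c so the partner gains b), defect (do nothing), or
# punish (pay alpha so the partner loses beta). The b, c, alpha, beta values
# are placeholders, not the experiment's stakes.

def apply_move(payoffs, actor, partner, move, b=3, c=1, alpha=1, beta=4):
    """Update the running payoffs of (actor, partner) given the actor's move."""
    payoffs = dict(payoffs)
    if move == "C":            # cooperation: costly help
        payoffs[actor] -= c
        payoffs[partner] += b
    elif move == "P":          # costly punishment: pay to make the partner lose
        payoffs[actor] -= alpha
        payoffs[partner] -= beta
    # defection ("D") changes nothing
    return payoffs

payoffs = {"A": 0, "B": 0}
payoffs = apply_move(payoffs, "A", "B", "C")   # A cooperates
payoffs = apply_move(payoffs, "B", "A", "P")   # B punishes anyway
print(payoffs)                                 # {'A': -5, 'B': 2}
```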


Science | 2009

Positive Interactions Promote Public Cooperation

David G. Rand; Anna Dreber; Tore Ellingsen; Drew Fudenberg; Martin A. Nowak

Carrots Are Better Than Sticks (editor's summary): The challenge of dealing with freeloaders—who benefit from a common good but refuse to pay their “fair share” of the costs—has often been met in theoretical and laboratory studies by sanctioning costly punishment, in which contributors pay a portion of their benefit so that freeloaders lose theirs. Rand et al. (p. 1272; see the news story by Pennisi and the cover) added a private interaction session after each round of the public goods game during which participants were allowed to reward or punish other members of their group. The outcome showed that reward was as effective as punishment in maintaining a cooperative mindset, and doing so via rewarding interactions allowed the entire group to prosper because less is lost to the costs of punishing. Reward is as good as punishment to promote cooperation, costs less, and increases the share-out of resources up for grabs.

The public goods game is the classic laboratory paradigm for studying collective action problems. Each participant chooses how much to contribute to a common pool that returns benefits to all participants equally. The ideal outcome occurs if everybody contributes the maximum amount, but the self-interested strategy is not to contribute anything. Most previous studies have found punishment to be more effective than reward for maintaining cooperation in public goods games. The typical design of these studies, however, represses future consequences for today’s actions. In an experimental setting, we compare public goods games followed by punishment, reward, or both in the setting of truly repeated games, in which player identities persist from round to round. We show that reward is as effective as punishment for maintaining public cooperation and leads to higher total earnings. Moreover, when both options are available, reward leads to increased contributions and payoff, whereas punishment has no effect on contributions and leads to lower payoff. We conclude that reward outperforms punishment in repeated public goods games and that human cooperation in such repeated settings is best supported by positive interactions with others.
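As a rough illustration of the paradigm, here is a minimal sketch of one public goods round followed by a targeted reward or punishment interaction. The multiplier and the reward/punishment cost-to-impact values are assumed for the example and are not the parameters of the actual study.

```python
# Minimal sketch of a public goods round with an optional pairwise
# reward/punishment stage. All parameters are illustrative, not the
# values used in the original experiment.

def public_goods_round(contributions, multiplier=1.6):
    """Each player contributes from an endowment; the pooled amount is
    multiplied and shared equally among all players."""
    pool = sum(contributions) * multiplier
    share = pool / len(contributions)
    # Net payoff relative to keeping the full endowment: equal share minus contribution.
    return [share - c for c in contributions]

def targeted_interaction(payoffs, actor, target, action, cost=1.0, impact=3.0):
    """Reward: actor pays `cost`, target gains `impact`.
    Punish: actor pays `cost`, target loses `impact`."""
    payoffs = list(payoffs)
    payoffs[actor] -= cost
    payoffs[target] += impact if action == "reward" else -impact
    return payoffs

# Example: three players contribute 10, 10 and 0 units.
payoffs = public_goods_round([10, 10, 0])
# Player 0 rewards player 1 and punishes the free rider (player 2).
payoffs = targeted_interaction(payoffs, actor=0, target=1, action="reward")
payoffs = targeted_interaction(payoffs, actor=0, target=2, action="punish")
print(payoffs)
```

Rewarding costs the actor but adds to group earnings, whereas punishing destroys value for both parties, which is the intuition behind the study's headline result.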


Trends in Cognitive Sciences | 2013

Human cooperation (Feature Review)

David G. Rand; Martin A. Nowak

Why should you help a competitor? Why should you contribute to the public good if free riders reap the benefits of your generosity? Cooperation in a competitive world is a conundrum. Natural selection opposes the evolution of cooperation unless specific mechanisms are at work. Five such mechanisms have been proposed: direct reciprocity, indirect reciprocity, spatial selection, multilevel selection, and kin selection. Here we discuss empirical evidence from laboratory experiments and field studies of human interactions for each mechanism. We also consider cooperation in one-shot, anonymous interactions for which no mechanisms are apparent. We argue that this behavior reflects the overgeneralization of cooperative strategies learned in the context of direct and indirect reciprocity: we show that automatic, intuitive responses favor cooperative strategies that reciprocate.


Proceedings of the National Academy of Sciences of the United States of America | 2011

Dynamic social networks promote cooperation in experiments with humans

David G. Rand; Samuel Arbesman; Nicholas A. Christakis

Human populations are both highly cooperative and highly organized. Human interactions are not random but rather are structured in social networks. Importantly, ties in these networks often are dynamic, changing in response to the behavior of one's social partners. This dynamic structure permits an important form of conditional action that has been explored theoretically but has received little empirical attention: People can respond to the cooperation and defection of those around them by making or breaking network links. Here, we present experimental evidence of the power of using strategic link formation and dissolution, and the network modification it entails, to stabilize cooperation in sizable groups. Our experiments explore large-scale cooperation, where subjects’ cooperative actions are equally beneficial to all those with whom they interact. Consistent with previous research, we find that cooperation decays over time when social networks are shuffled randomly every round or are fixed across all rounds. We also find that, when networks are dynamic but are updated only infrequently, cooperation again fails. However, when subjects can update their network connections frequently, we see a qualitatively different outcome: Cooperation is maintained at a high level through network rewiring. Subjects preferentially break links with defectors and form new links with cooperators, creating an incentive to cooperate and leading to substantial changes in network structure. Our experiments confirm the predictions of a set of evolutionary game theoretic models and demonstrate the important role that dynamic social networks can play in supporting large-scale human cooperation.
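To make the rewiring mechanism concrete, the toy simulation below applies the rule the abstract describes: links to defectors tend to be broken and new links between cooperators tend to be formed. It is a simplified illustration with made-up probabilities and a fixed strategy assignment, not the authors' experimental platform.

```python
# Toy simulation of the rewiring rule described above: cooperators tend to
# cut ties to defectors and form new ties to other cooperators.
import random

random.seed(0)
N = 20
strategy = {i: random.choice(["C", "D"]) for i in range(N)}   # held fixed for brevity
links = {frozenset((i, j)) for i in range(N) for j in range(i + 1, N)
         if random.random() < 0.2}                            # random initial network

def rewire_step(links, strategy, p_break=0.7, p_form=0.3):
    """With probability p_break, sever a randomly chosen C-D link;
    with probability p_form, create a new C-C link."""
    links = set(links)
    cd_links = [e for e in links if {strategy[v] for v in e} == {"C", "D"}]
    if cd_links and random.random() < p_break:
        links.discard(random.choice(cd_links))
    coops = [v for v, s in strategy.items() if s == "C"]
    if len(coops) >= 2 and random.random() < p_form:
        a, b = random.sample(coops, 2)
        links.add(frozenset((a, b)))
    return links

for _ in range(100):
    links = rewire_step(links, strategy)
print(len(links), "links remain after rewiring")
```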


Journal of Experimental Psychology: General | 2012

Divine intuition: Cognitive style influences belief in God

Amitai Shenhav; David G. Rand; Joshua D. Greene

Some have argued that belief in God is intuitive, a natural (by-)product of the human mind given its cognitive structure and social context. If this is true, the extent to which one believes in God may be influenced by one's more general tendency to rely on intuition versus reflection. Three studies support this hypothesis, linking intuitive cognitive style to belief in God. Study 1 showed that individual differences in cognitive style predict belief in God. Participants completed the Cognitive Reflection Test (CRT; Frederick, 2005), which employs math problems that, although easily solvable, have intuitively compelling incorrect answers. Participants who gave more intuitive answers on the CRT reported stronger belief in God. This effect was not mediated by education level, income, political orientation, or other demographic variables. Study 2 showed that the correlation between CRT scores and belief in God also holds when cognitive ability (IQ) and aspects of personality were controlled. Moreover, both studies demonstrated that intuitive CRT responses predicted the degree to which individuals reported having strengthened their belief in God since childhood, but not their familial religiosity during childhood, suggesting a causal relationship between cognitive style and change in belief over time. Study 3 revealed such a causal relationship over the short term: Experimentally inducing a mindset that favors intuition over reflection increases self-reported belief in God.


Physics Reports | 2017

Statistical physics of human cooperation

Matjaz Perc; Jillian J. Jordan; David G. Rand; Zhen Wang; Stefano Boccaletti; Attila Szolnoki

Extensive cooperation among unrelated individuals is unique to humans, who often sacrifice personal benefits for the common good and work together to achieve what they are unable to execute alone. The evolutionary success of our species is indeed due, to a large degree, to our unparalleled other-regarding abilities. Yet, a comprehensive understanding of human cooperation remains a formidable challenge. Recent research in the social sciences indicates that it is important to focus on the collective behavior that emerges as the result of the interactions among individuals, groups, and even societies. Non-equilibrium statistical physics, in particular Monte Carlo methods and the theory of collective behavior of interacting particles near phase transition points, has proven to be very valuable for understanding counterintuitive evolutionary outcomes. By treating models of human cooperation as classical spin models, a physicist can draw on familiar settings from statistical physics. However, unlike pairwise interactions among particles that typically govern solid-state physics systems, interactions among humans often involve group interactions, and they also involve a larger number of possible states even for the most simplified description of reality. The complexity of solutions therefore often surpasses that observed in physical systems. Here we review experimental and theoretical research that advances our understanding of human cooperation, focusing on spatial pattern formation, on the spatiotemporal dynamics of observed solutions, and on self-organization that may either promote or hinder socially favorable states.
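A standard example of the Monte Carlo approach this review surveys is strategy imitation with a Fermi (logistic) rule on a lattice game; the sketch below shows one such update step. The lattice size, temptation parameter b, and noise K are arbitrary choices, and the weak prisoner's dilemma payoffs are just one common model from this literature, not necessarily the exact system analysed in the review.

```python
# Sketch of one Monte Carlo step for a spatial game treated like a spin model:
# a random player compares payoffs with a random neighbour and imitates that
# neighbour's strategy with a Fermi (logistic) probability.
import math, random

random.seed(1)
L, b, K = 20, 1.05, 0.1                     # lattice size, temptation to defect, noise
spin = [[random.choice([1, 0]) for _ in range(L)] for _ in range(L)]  # 1 = cooperate

def neighbours(x, y):
    return [((x + 1) % L, y), ((x - 1) % L, y), (x, (y + 1) % L), (x, (y - 1) % L)]

def payoff(x, y):
    """Weak prisoner's dilemma: C-C pays 1, D-C pays b, C-D and D-D pay 0."""
    total = 0.0
    for nx, ny in neighbours(x, y):
        if spin[x][y] == 1 and spin[nx][ny] == 1:
            total += 1.0
        elif spin[x][y] == 0 and spin[nx][ny] == 1:
            total += b
    return total

def monte_carlo_step():
    x, y = random.randrange(L), random.randrange(L)
    nx, ny = random.choice(neighbours(x, y))
    # Fermi rule: adopt the neighbour's strategy more readily the higher its payoff.
    p_adopt = 1.0 / (1.0 + math.exp((payoff(x, y) - payoff(nx, ny)) / K))
    if random.random() < p_adopt:
        spin[x][y] = spin[nx][ny]

for _ in range(10000):
    monte_carlo_step()
print("cooperator fraction:", sum(map(sum, spin)) / (L * L))
```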


Proceedings of the Royal Society of London B: Biological Sciences | 2010

Emotions as infectious diseases in a large social network: the SISa model

Alison L. Hill; David G. Rand; Martin A. Nowak; Nicholas A. Christakis

Human populations are arranged in social networks that determine interactions and influence the spread of diseases, behaviours and ideas. We evaluate the spread of long-term emotional states across a social network. We introduce a novel form of the classical susceptible–infected–susceptible disease model which includes the possibility for ‘spontaneous’ (or ‘automatic’) infection, in addition to disease transmission (the SISa model). Using this framework and data from the Framingham Heart Study, we provide formal evidence that positive and negative emotional states behave like infectious diseases spreading across social networks over long periods of time. The probability of becoming content is increased by 0.02 per year for each content contact, and the probability of becoming discontent is increased by 0.04 per year per discontent contact. Our mathematical formalism allows us to derive various quantities from the data, such as the average lifetime of a contentment ‘infection’ (10 years) or discontentment ‘infection’ (5 years). Our results give insight into the transmissive nature of positive and negative emotional states. Determining to what extent particular emotions or behaviours are infectious is a promising direction for further research with important implications for social science, epidemiology and health policy. Our model provides a theoretical framework for studying the interpersonal spread of any state that may also arise spontaneously, such as emotions, behaviours, health states, ideas or diseases with reservoirs.
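The SISa dynamics can be sketched as a simple discrete-time simulation. The transmission rate (0.02 per year per content contact) and the 10-year average duration of a contentment 'infection' follow the figures quoted above; the spontaneous rate a, the contact network, and the time step are placeholder assumptions for illustration only.

```python
# Minimal sketch of SISa dynamics for a single emotional state ("content")
# on a random contact network. Transmission (0.02/yr per content contact) and
# recovery (1/10 per yr, i.e. a 10-year average 'infection') follow the abstract;
# the spontaneous rate `a` is a placeholder value.
import random

random.seed(2)
N, years, dt = 500, 50, 1.0
a, beta, g = 0.01, 0.02, 0.1     # spontaneous, per-contact transmission, recovery (per year)
contacts = {i: random.sample([j for j in range(N) if j != i], 5) for i in range(N)}
content = [random.random() < 0.1 for _ in range(N)]

for _ in range(int(years / dt)):
    nxt = list(content)
    for i in range(N):
        if content[i]:
            nxt[i] = random.random() > g * dt          # may revert to 'susceptible'
        else:
            k = sum(content[j] for j in contacts[i])   # number of content contacts
            nxt[i] = random.random() < (a + beta * k) * dt
    content = nxt

print("fraction content after", years, "years:", sum(content) / N)
```

The key difference from the classical SIS model is the constant term a: the state can arise spontaneously even with no 'infected' contacts, which is why the model suits emotions and behaviours as well as pathogens.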


Nature Communications | 2014

Humans display a ‘cooperative phenotype’ that is domain general and temporally stable

Alexander Peysakhovich; Martin A. Nowak; David G. Rand

Understanding human cooperation is of major interest across the natural and social sciences. But it is unclear to what extent cooperation is actually a general concept. Most research on cooperation has implicitly assumed that a person’s behavior in one cooperative context is related to their behavior in other settings, and at later times. However, there is little empirical evidence in support of this assumption. Here, we provide such evidence by collecting thousands of game decisions from over 1,400 individuals. A person’s decisions in different cooperation games are correlated, as are those decisions and both self-report and real-effort measures of cooperation in non-game contexts. Equally strong correlations exist between cooperative decisions made an average of 124 days apart. Importantly, we find that cooperation is not correlated with norm-enforcing punishment or non-competitiveness. We conclude that there is a domain-general and temporally stable inclination toward paying costs to benefit others, which we dub the ‘cooperative phenotype’.


Proceedings of the National Academy of Sciences of the United States of America | 2012

Direct reciprocity in structured populations

Matthijs van Veelen; Julián García; David G. Rand; Martin A. Nowak

Reciprocity and repeated games have been at the center of attention when studying the evolution of human cooperation. Direct reciprocity is considered to be a powerful mechanism for the evolution of cooperation, and it is generally assumed that it can lead to high levels of cooperation. Here we explore an open-ended, infinite strategy space, where every strategy that can be encoded by a finite state automaton is a possible mutant. Surprisingly, we find that direct reciprocity alone does not lead to high levels of cooperation. Instead we observe perpetual oscillations between cooperation and defection, with defection being substantially more frequent than cooperation. The reason for this is that “indirect invasions” remove equilibrium strategies: every strategy has neutral mutants, which in turn can be invaded by other strategies. However, reciprocity is not the only way to promote cooperation. Another mechanism for the evolution of cooperation, which has received as much attention, is assortment because of population structure. Here we develop a theory that allows us to study the synergistic interaction between direct reciprocity and assortment. This framework is particularly well suited for understanding human interactions, which are typically repeated and occur in relatively fluid but not unstructured populations. We show that if repeated games are combined with only a small amount of assortment, then natural selection favors the behavior typically observed among humans: high levels of cooperation implemented using conditional strategies.
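The strategy space explored here consists of repeated-game strategies encoded as finite state automata. As a minimal sketch of that representation, the snippet below encodes tit-for-tat as a two-state automaton and plays it against always-defect; the prisoner's dilemma payoff values are illustrative.

```python
# Sketch of a repeated prisoner's dilemma strategy encoded as a finite state
# automaton: each state prescribes an action, and the opponent's last move
# determines the next state. Payoff values below are illustrative.

# Tit-for-tat as a two-state automaton: state 0 cooperates, state 1 defects;
# transitions copy the opponent's last move.
TIT_FOR_TAT = {"start": 0,
               "action": {0: "C", 1: "D"},
               "next":   {(0, "C"): 0, (0, "D"): 1, (1, "C"): 0, (1, "D"): 1}}

ALWAYS_DEFECT = {"start": 0, "action": {0: "D"}, "next": {(0, "C"): 0, (0, "D"): 0}}

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5), ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(auto1, auto2, rounds=10):
    s1, s2, total1, total2 = auto1["start"], auto2["start"], 0, 0
    for _ in range(rounds):
        a1, a2 = auto1["action"][s1], auto2["action"][s2]
        p1, p2 = PAYOFF[(a1, a2)]
        total1, total2 = total1 + p1, total2 + p2
        s1, s2 = auto1["next"][(s1, a2)], auto2["next"][(s2, a1)]
    return total1, total2

print(play(TIT_FOR_TAT, ALWAYS_DEFECT))   # (9, 14): exploited once, then mutual defection
```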

Collaboration


Dive into David G. Rand's collaborations.

Top Co-Authors

Anna Dreber

Stockholm School of Economics
