Publication


Featured research published by Jillian J. Jordan.


Physics Reports | 2017

Statistical physics of human cooperation

Matjaz Perc; Jillian J. Jordan; David G. Rand; Zhen Wang; Stefano Boccaletti; Attila Szolnoki

Extensive cooperation among unrelated individuals is unique to humans, who often sacrifice personal benefits for the common good and work together to achieve what they are unable to execute alone. The evolutionary success of our species is indeed due, to a large degree, to our unparalleled other-regarding abilities. Yet, a comprehensive understanding of human cooperation remains a formidable challenge. Recent research in the social sciences indicates that it is important to focus on the collective behavior that emerges as the result of the interactions among individuals, groups, and even societies. Non-equilibrium statistical physics, in particular Monte Carlo methods and the theory of collective behavior of interacting particles near phase transition points, has proven to be very valuable for understanding counterintuitive evolutionary outcomes. By treating models of human cooperation as classical spin models, a physicist can draw on familiar settings from statistical physics. However, unlike pairwise interactions among particles that typically govern solid-state physics systems, interactions among humans often involve group interactions, and they also involve a larger number of possible states even for the most simplified description of reality. The complexity of solutions therefore often surpasses that observed in physical systems. Here we review experimental and theoretical research that advances our understanding of human cooperation, focusing on spatial pattern formation, on the spatiotemporal dynamics of observed solutions, and on self-organization that may either promote or hinder socially favorable states.
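
To make the Monte Carlo approach described above concrete, here is a minimal sketch (not taken from the review) of a spatial public goods game on a square lattice with Fermi-rule strategy imitation, a standard setup in this literature. The lattice size L, synergy factor r, and selection noise K are arbitrary illustrative choices.

```python
import math
import random

L = 20      # lattice side length (illustrative choice)
r = 3.8     # public goods synergy factor (assumed value)
K = 0.5     # Fermi noise: uncertainty in strategy adoption

# 1 = cooperator, 0 = defector, on an L x L lattice with periodic boundaries.
strategy = [[random.randint(0, 1) for _ in range(L)] for _ in range(L)]

def neighbors(x, y):
    return [((x + 1) % L, y), ((x - 1) % L, y),
            (x, (y + 1) % L), (x, (y - 1) % L)]

def payoff(x, y):
    """Payoff of player (x, y), summed over the five 5-member groups
    (centered on itself and on each neighbor) it belongs to."""
    total = 0.0
    for cx, cy in [(x, y)] + neighbors(x, y):
        group = [(cx, cy)] + neighbors(cx, cy)
        pool = sum(strategy[i][j] for i, j in group)  # 1 unit per cooperator
        total += r * pool / len(group) - strategy[x][y]
    return total

def monte_carlo_step():
    """One full step: L*L random pairwise imitation attempts (Fermi rule)."""
    for _ in range(L * L):
        x, y = random.randrange(L), random.randrange(L)
        nx, ny = random.choice(neighbors(x, y))
        if strategy[x][y] == strategy[nx][ny]:
            continue
        # Adopt the neighbor's strategy with the Fermi probability.
        p = 1.0 / (1.0 + math.exp((payoff(x, y) - payoff(nx, ny)) / K))
        if random.random() < p:
            strategy[x][y] = strategy[nx][ny]

for _ in range(50):
    monte_carlo_step()
print("cooperator fraction:", sum(map(sum, strategy)) / (L * L))
```

In simulations of this kind, whether cooperation survives depends on r and K, and surviving cooperators form spatial clusters; that clustering is the pattern-formation effect the review discusses.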


Scientific Reports | 2015

Heuristics guide the implementation of social preferences in one-shot Prisoner's Dilemma experiments.

Valerio Capraro; Jillian J. Jordan; David G. Rand

Cooperation in one-shot anonymous interactions is a widely documented aspect of human behaviour. Here we shed light on the motivations behind this behaviour by experimentally exploring cooperation in a one-shot continuous-strategy Prisoner's Dilemma (i.e. a one-shot two-player Public Goods Game). We examine the distribution of cooperation amounts, and how that distribution varies based on the benefit-to-cost ratio of cooperation (b/c). Interestingly, we find a trimodal distribution at all b/c values investigated. Increasing b/c decreases the fraction of participants engaging in zero cooperation and increases the fraction engaging in maximal cooperation, suggesting a role for efficiency concerns. However, a substantial fraction of participants consistently engage in 50% cooperation regardless of b/c. The presence of these persistent 50% cooperators is surprising, and not easily explained by standard models of social preferences. We present evidence that this behaviour is a result of social preferences guided by simple decision heuristics, rather than the rational examination of payoffs assumed by most social preference models. We also find a strong correlation between play in the Prisoner's Dilemma and in a subsequent Dictator Game, confirming previous findings suggesting a common prosocial motivation underlying altruism and cooperation.
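
As a rough illustration of the game described above, the following sketch computes payoffs in a continuous two-player Public Goods Game. The endowment and b/c values are invented for illustration, not the experiment's parameters.

```python
def payoffs(x1, x2, endowment=100.0, bc=2.0):
    """Continuous Prisoner's Dilemma / two-player PGG: each player gives up
    x units at personal cost, and the partner receives bc * x in return."""
    p1 = endowment - x1 + bc * x2
    p2 = endowment - x2 + bc * x1
    return p1, p2

# Both players at zero, 50%, and full cooperation (bc = 2.0, assumed):
for frac in (0.0, 0.5, 1.0):
    x = frac * 100.0
    print(f"{frac:.0%} cooperation each -> payoffs {payoffs(x, x)}")
# Joint payoffs rise with mutual cooperation whenever bc > 1, yet each player
# individually earns more by contributing less -- the dilemma in the title.
```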


Nature | 2016

Third-party punishment as a costly signal of trustworthiness

Jillian J. Jordan; Moshe Hoffman; Paul Bloom; David G. Rand

Third-party punishment (TPP), in which unaffected observers punish selfishness, promotes cooperation by deterring defection. But why should individuals choose to bear the costs of punishing? We present a game theoretic model of TPP as a costly signal of trustworthiness. Our model is based on individual differences in the costs and/or benefits of being trustworthy. We argue that individuals for whom trustworthiness is payoff-maximizing will find TPP to be less net costly (for example, because mechanisms that incentivize some individuals to be trustworthy also create benefits for deterring selfishness via TPP). We show that because of this relationship, it can be advantageous for individuals to punish selfishness in order to signal that they are not selfish themselves. We then empirically validate our model using economic game experiments. We show that TPP is indeed a signal of trustworthiness: third-party punishers are trusted more, and actually behave in a more trustworthy way, than non-punishers. Furthermore, as predicted by our model, introducing a more informative signal—the opportunity to help directly—attenuates these signalling effects. When potential punishers have the chance to help, they are less likely to punish, and punishment is perceived as, and actually is, a weaker signal of trustworthiness. Costly helping, in contrast, is a strong and highly used signal even when TPP is also possible. Together, our model and experiments provide a formal reputational account of TPP, and demonstrate how the costs of punishing may be recouped by the long-run benefits of signalling one’s trustworthiness.
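
The separating logic of the model can be sketched with invented numbers: punishment signals trustworthiness only if its net cost is lower for trustworthy types, so that only they find the reputational benefit worth paying for. The payoff values below are purely illustrative, not the paper's.

```python
# Hypothetical payoffs, chosen only to illustrate a separating equilibrium.
trust_benefit = 5.0                     # value of being trusted by observers
net_tpp_cost = {
    "trustworthy": 2.0,    # types incentivized to be trustworthy also gain from
                           # deterring selfishness, lowering their net cost of TPP
    "untrustworthy": 8.0,  # no such offsetting benefit
}

for ptype, cost in net_tpp_cost.items():
    punishes = trust_benefit > cost     # punish iff reputational gain exceeds net cost
    print(f"{ptype}: net TPP cost {cost:.1f} -> punishes: {punishes}")

# Only the trustworthy type punishes, so observed punishment works as an
# honest (costly) signal of trustworthiness.
```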


Cognition | 2015

Costly third-party punishment in young children

Katherine McAuliffe; Jillian J. Jordan; Felix Warneken

Human adults engage in costly third-party punishment of unfair behavior, but the developmental origins of this behavior are unknown. Here we investigate costly third-party punishment in 5- and 6-year-old children. Participants were asked to accept (enact) or reject (punish) proposed allocations of resources between a pair of absent, anonymous children. In addition, we manipulated whether subjects had to pay a cost to punish proposed allocations. Experiment 1 showed that 6-year-olds (but not 5-year-olds) punished unfair proposals more than fair proposals. However, children punished less when doing so was personally costly. Thus, while sensitive to cost, they were willing to sacrifice resources to intervene against unfairness. Experiment 2 showed that 6-year-olds were less sensitive to unequal allocations when they resulted from selfishness than from generosity. These findings show that costly third-party punishment of unfair behavior is present in young children, suggesting that from early in development children show a sophisticated capacity to promote fair behavior.


Proceedings of the National Academy of Sciences of the United States of America | 2014

Development of in-group favoritism in children’s third-party punishment of selfishness

Jillian J. Jordan; Katherine McAuliffe; Felix Warneken

Significance: Humans are unique among animals in their willingness to cooperate with friends and strangers. Costly punishment of unfair behavior is thought to play a key role in promoting cooperation by deterring selfishness. Importantly, adults sometimes show in-group favoritism in their punishment. To our knowledge, our study is the first to document this bias in children. Furthermore, our results suggest that from its emergence in development, children’s costly punishment shows in-group favoritism, highlighting that group membership provides critical context for understanding the enforcement of fairness norms. However, 8-y-old children show attenuated bias relative to 6-y-olds, perhaps reflecting a motivation for impartiality. Our findings thus demonstrate that in-group favoritism has an important influence on human fairness and morality, but can be partially overcome with age.

When enforcing norms for cooperative behavior, human adults sometimes exhibit in-group bias. For example, third-party observers punish selfish behaviors committed by out-group members more harshly than similar behaviors committed by in-group members. Although evidence suggests that children begin to systematically punish selfish behavior around the age of 6 y, the development of in-group bias in their punishment remains unknown. Do children start off enforcing fairness norms impartially, or is norm enforcement biased from its emergence? How does bias change over development? Here, we created novel social groups in the laboratory and gave 6- and 8-year-olds the opportunity to engage in costly third-party punishment of selfish sharing behavior. We found that by age 6, punishment was already biased: Selfish resource allocations received more punishment when they were proposed by out-group members and when they disadvantaged in-group members. We also found that although costly punishment increased between ages 6 and 8, bias in punishment partially decreased. Although 8-y-olds also punished selfish out-group members more harshly, they were equally likely to punish on behalf of disadvantaged in-group and out-group members, perhaps reflecting efforts to enforce norms impartially. Taken together, our results suggest that norm enforcement is biased from its emergence, but that this bias can be partially overcome through developmental change.


PLOS ONE | 2013

Contagion of Cooperation in Static and Fluid Social Networks.

Jillian J. Jordan; David G. Rand; Samuel Arbesman; James H. Fowler; Nicholas A. Christakis

Cooperation is essential for successful human societies. Thus, understanding how cooperative and selfish behaviors spread from person to person is a topic of theoretical and practical importance. Previous laboratory experiments provide clear evidence of social contagion in the domain of cooperation, both in fixed networks and in randomly shuffled networks, but leave open the possibility of asymmetries in the spread of cooperative and selfish behaviors. Additionally, many real human interaction structures are dynamic: we often have control over whom we interact with. Dynamic networks may differ importantly in the goals and strategic considerations they promote, and thus the question of how cooperative and selfish behaviors spread in dynamic networks remains open. Here, we address these questions with data from a social dilemma laboratory experiment. We measure the contagion of both cooperative and selfish behavior over time across three different network structures that vary in the extent to which they afford individuals control over their network ties. We find that in relatively fixed networks, both cooperative and selfish behaviors are contagious. In contrast, in more dynamic networks, selfish behavior is contagious, but cooperative behavior is not: subjects are fairly likely to switch to cooperation regardless of the behavior of their neighbors. We hypothesize that this insensitivity to the behavior of neighbors in dynamic networks is the result of subjects’ desire to attract new cooperative partners: even if many of one’s current neighbors are defectors, it may still make sense to switch to cooperation. We further hypothesize that selfishness remains contagious in dynamic networks because of the well-documented willingness of cooperators to retaliate against selfishness, even when doing so is costly. These results shed light on the contagion of cooperative behavior in fixed and fluid networks, and have implications for influence-based interventions aiming at increasing cooperative behavior.
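
One way to operationalize the contagion measured here is to ask whether the fraction of a subject's neighbors who cooperated in round t predicts the subject's own choice in round t+1. The sketch below does this on made-up round records (the data and field layout are hypothetical, not the experiment's):

```python
# Hypothetical (fraction of neighbors cooperating in round t,
# subject's own choice in round t+1) records; 1 = cooperate, 0 = defect.
records = [(0.00, 0), (0.25, 0), (0.25, 1), (0.50, 1), (0.75, 1),
           (1.00, 1), (0.00, 1), (0.50, 0), (0.75, 1), (1.00, 1)]

def cooperation_rate(low, high):
    """Mean next-round cooperation among records with low <= frac < high."""
    sub = [own for frac, own in records if low <= frac < high]
    return sum(sub) / len(sub) if sub else float("nan")

# Contagion shows up as a gap between these conditional rates; the paper's
# dynamic-network finding corresponds to this gap flattening for cooperation.
print("own cooperation | minority of neighbors cooperated:",
      cooperation_rate(0.0, 0.5))
print("own cooperation | majority of neighbors cooperated:",
      cooperation_rate(0.5, 1.01))
```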


Proceedings of the National Academy of Sciences of the United States of America | 2016

Uncalculating Cooperation Is Used to Signal Trustworthiness

Jillian J. Jordan; Moshe Hoffman; Martin A. Nowak; David G. Rand

Significance: Human prosociality presents an evolutionary puzzle, and reciprocity has emerged as a dominant explanation: cooperating today can bring benefits tomorrow. Reciprocity theories clearly predict that people should only cooperate when the benefits outweigh the costs, and thus that the decision to cooperate should always depend on a cost–benefit analysis. Yet human cooperation can be very uncalculating: good friends grant favors without asking questions, romantic love “blinds” us to the costs of devotion, and ethical principles make universal moral prescriptions. Here, we provide the first evidence, to our knowledge, that reputation effects drive uncalculating cooperation. We demonstrate, using economic game experiments, that people engage in uncalculating cooperation to signal that they can be relied upon to cooperate in the future.

Humans frequently cooperate without carefully weighing the costs and benefits. As a result, people may wind up cooperating when it is not worthwhile to do so. Why risk making costly mistakes? Here, we present experimental evidence that reputation concerns provide an answer: people cooperate in an uncalculating way to signal their trustworthiness to observers. We present two economic game experiments in which uncalculating versus calculating decision-making is operationalized by either a subject’s choice of whether to reveal the precise costs of cooperating (Exp. 1) or the time a subject spends considering these costs (Exp. 2). In both experiments, we find that participants are more likely to engage in uncalculating cooperation when their decision-making process is observable to others. Furthermore, we confirm that people who engage in uncalculating cooperation are perceived as, and actually are, more trustworthy than people who cooperate in a calculating way. Taken together, these data provide the first empirical evidence, to our knowledge, that uncalculating cooperation is used to signal trustworthiness, and is not merely an efficient decision-making strategy that reduces cognitive costs. Our results thus help to explain a range of puzzling behaviors, such as extreme altruism, the use of ethical principles, and romantic love.
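
A toy rendering of this signaling logic, with invented numbers rather than the experiments' parameters: when the decision process is observable, declining to inspect the cost of cooperating can pay, because the reputational gain outweighs the occasional high-cost mistake.

```python
# Hypothetical parameters, not the experiments' values.
p_high, low_cost, high_cost = 0.2, 1.0, 6.0  # cooperating is usually cheap
benefit = 4.0                                # payoff from cooperating
trust_bonus = 3.0                            # observers' extra trust earned by
                                             # deciding without checking costs

# Calculating player: inspects the cost, cooperates only when it pays,
# and forgoes the trust bonus because the inspection itself is observed.
ev_calculating = (1 - p_high) * (benefit - low_cost)

# Uncalculating player: cooperates without looking, absorbing the occasional
# high cost, but earns the trust bonus when observed.
mean_cost = p_high * high_cost + (1 - p_high) * low_cost
ev_uncalculating = (benefit - mean_cost) + trust_bonus

print(f"calculating: {ev_calculating:.2f}  uncalculating: {ev_uncalculating:.2f}")
```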


Psychological Science | 2017

Why Do We Hate Hypocrites? Evidence for a Theory of False Signaling

Jillian J. Jordan; Roseanna Sommers; Paul Bloom; David G. Rand

Why do people judge hypocrites, who condemn immoral behaviors that they in fact engage in, so negatively? We propose that hypocrites are disliked because their condemnation sends a false signal about their personal conduct, deceptively suggesting that they behave morally. We show that verbal condemnation signals moral goodness (Study 1) and does so even more convincingly than directly stating that one behaves morally (Study 2). We then demonstrate that people judge hypocrites negatively—even more negatively than people who directly make false statements about their morality (Study 3). Finally, we show that “honest” hypocrites—who avoid false signaling by admitting to committing the condemned transgression—are not perceived negatively even though their actions contradict their stated values (Study 4). Critically, the same is not true of hypocrites who engage in false signaling but admit to unrelated transgressions (Study 5). Together, our results support a false-signaling theory of hypocrisy.


Nature Human Behaviour | 2018

Which accusations stick?

Jillian J. Jordan

The social function of witchcraft accusations remains opaque. An empirical study of Chinese villagers shows that the label ‘zhu’ influences who interacts with whom across a social network, but appears not to tag defectors in service of promoting cooperation. An open question thus remains: from witchcraft to gossip, which accusations stick?


Social Science Research Network | 2017

Third-Party Punishment as a Costly Signal of High Continuation Probabilities in Repeated Games

Jillian J. Jordan; David G. Rand

Why do individuals pay costs to punish selfish behavior, even as third-party observers? A large body of research suggests that reputation plays an important role in motivating such third-party punishment (TPP). Here we focus on a recently proposed reputation-based account (Jordan et al., 2016) that invokes costly signaling. This account proposed that “trustworthy type” individuals (who are incentivized to cooperate with others) typically experience lower costs of TPP, and thus that TPP can function as a costly signal of trustworthiness. Specifically, it was argued that some but not all individuals face incentives to cooperate, making them high-quality and trustworthy interaction partners; and, because the same mechanisms that incentivize cooperation also create benefits for using TPP to deter selfish behavior, these individuals are likely to experience reduced costs of punishing selfishness. Here, we extend this conceptual framework by providing a concrete, “from-the-ground-up” model demonstrating how this process could work in the context of repeated interactions incentivizing both cooperation and punishment. We show how individual differences in the probability of future interaction can create types that vary in whether they find cooperation payoff-maximizing (and thus make high-quality partners), as well as in their net costs of TPP – because a higher continuation probability increases the likelihood of receiving rewards from the victim of the punished transgression (thus offsetting the cost of punishing). We also provide a simple model of dispersal that demonstrates how types that vary in their continuation probabilities can stably coexist, because the payoff from remaining in one’s local environment (i.e. not dispersing) decreases with the number of others who stay. Together, this model demonstrates, from the ground up, how TPP can serve as a costly signal of trustworthiness arising from exposure to repeated interactions.
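
A minimal numeric sketch of the model's core logic (the payoffs and delta values are invented, not the paper's calibration): a higher continuation probability delta both makes cooperation payoff-maximizing and offsets the cost of punishing via expected future rewards from the victim.

```python
# Illustrative parameters only, not the paper's calibration.
b, c = 3.0, 1.0       # benefit and cost of cooperating in each interaction
tpp_cost = 1.5        # immediate cost of third-party punishment
victim_reward = 0.8   # per-future-round reward from the punished party's victim

for delta in (0.2, 0.5, 0.9):          # continuation probability (the "type")
    future = delta / (1 - delta)       # expected number of future rounds
    cooperates = future * (b - c) > c  # future gains vs. today's cost of cooperating
    net_tpp = tpp_cost - future * victim_reward
    print(f"delta={delta}: cooperates={cooperates}, net TPP cost={net_tpp:+.2f}")

# High-delta types both find cooperation payoff-maximizing (making them
# trustworthy partners) and face a low or negative net cost of punishing,
# so TPP honestly signals a high continuation probability.
```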
