Mathematical foundations of moral preferences
Valerio Capraro∗ and Matjaž Perc†
Department of Economics, Middlesex University, The Burroughs, London NW4 4BT, U.K.
Faculty of Natural Sciences and Mathematics, University of Maribor, Koroška cesta 160, 2000 Maribor, Slovenia
Department of Medical Research, China Medical University Hospital, China Medical University, Taichung 404332, Taiwan
Alma Mater Europaea ECM, Slovenska ulica 17, 2000 Maribor, Slovenia
Complexity Science Hub Vienna, Josefstädterstraße 39, 1080 Vienna, Austria
(Dated: January 22, 2021)
Abstract
One-shot anonymous unselfishness in economic games is commonly explained by social preferences, which assume that people care about the monetary payoffs of others. However, during the last ten years, research has shown that different types of unselfish behaviour, including cooperation, altruism, truth-telling, altruistic punishment, and trustworthiness are in fact better explained by preferences for following one's own personal norms – internal standards about what is right or wrong in a given situation. Beyond better organising various forms of unselfish behaviour, this moral preference hypothesis has recently also been used to increase charitable donations, simply by means of interventions that make the morality of an action salient. Here we review experimental and theoretical work dedicated to this rapidly growing field of research, and in doing so we outline mathematical foundations for moral preferences that can be used in future models to better understand selfless human actions and to adjust policies accordingly. These foundations can also be used by artificial intelligence to better navigate the complex landscape of human morality.

∗ [email protected]
† [email protected]

I. INTRODUCTION

Most people are not completely selfish. Given the right circumstances, they are happy to give up a part of their benefit to help other people or the society as a whole. Psychologists and economists have long observed that some people act unselfishly even in one-shot anonymous interactions, when there are no direct or indirect benefits for doing so [1, 2]. The question is why? Understanding what motivates people to act unselfishly in one-shot, anonymous interactions is of great theoretical and practical importance.
Theoretically, it may lead to a more complete and precise mathematical framework to formalise human decision-making, while practically, it may suggest policies and interventions to promote unselfish behaviour, with the ultimate goal of improving our societies.

To study one-shot unselfishness, behavioural scientists usually turn to laboratory experiments using economic games, in which experimental subjects have to make monetary decisions that involve various forms of other-regarding behaviour. In this context, and throughout this review, selfishness and other-regarding behaviour are defined with respect to monetary payoffs. Clearly, a behaviour that is unselfish from the point of view of monetary outcomes may turn out to be selfish from a more general perspective that also takes psychological benefits and costs into account. For example, some people may engage in unselfish behaviour to decrease negative mood [3] or increase positive feelings [4]. Therefore, in the last decades, behavioural scientists have been trying to mathematically explain unselfish behaviour by means of a utility function that depends on factors other than solely the monetary payoff of the decision maker. Based on empirical data, scholars initially advanced the explanation that human unselfishness in one-shot anonymous interactions is primarily driven by people caring not only about their own monetary payoff, but also, at least to some extent, about the monetary payoffs of the other people involved in the interaction [5–10].

However, about fifteen years ago, this social preference hypothesis came under critique because some experiments showed that two particular forms of unselfish behaviour, altruistic punishment and altruism (see Table 1 for these definitions), could not be entirely explained by preferences defined solely over monetary outcomes.
In 2010, building on work on the effect of social norms on people's behaviour [11–20], Bicchieri and Chavez [21] proposed to explain altruistic punishment by assuming that people have preferences for following their "personal norms" (what they personally believe to be the right thing to do) beyond the monetary consequences that this action brings about. Subsequently, Krupka and Weber [22] proposed to explain altruism using "injunctive norms" (what one believes others would approve/disapprove); however, in their analysis, they did not consider a potential role of personal norms. In the last five years, numerous other experiments challenged social preference models in several behavioural domains other than altruistic punishment and altruism [23–30]; moreover, the best interpretation of these results turns out to be in terms of personal norms, rather than other types of norms. Namely, the best way to organise these results is through the moral preference hypothesis, according to which people have preferences for following their personal norms, beyond the economic consequences that these actions bring about. This framework organises several forms of one-shot, anonymous unselfish behaviour, including cooperation, altruism, altruistic punishment, trustworthiness, honesty, and the equality-efficiency trade-off. We note at this stage that personal norms are not universally given. They certainly depend on the culture; for example, they can come from the internalisation of cultural values [15]. But they can also depend on the individual; anecdotal evidence suggests that, even within the same family, there might be people with different beliefs about what is right or wrong in a given situation. We will discuss this in more detail in Section VII F.

The moral preference hypothesis also holds promise of being very useful in practice. The idea is simple.
If people care about doing the right thing, then providing cues that make the rightness of an action salient should promote desirable behaviour. In fact, research has already demonstrated the applicability of this approach outside of the laboratory, showing in particular that nudges towards doing the right thing can increase charitable donations [31].

In the light of ample empirical research supporting the moral preference hypothesis, theoretical research aiming to formalise human decision-making by means of a mathematical framework is also at a crossroads. On the one hand, the traditional approach involving monetary payoffs has worked well in explaining many challenging aspects of pro-social behaviour. But on the other, experiments indicate that there are likely hard boundaries to this simplistic approach, which will thus have to be amended by more avant-garde concepts, including formalising the intangibles of moral psychology and philosophy.

Here we review this rapidly growing field of research within the following sections. Section II reviews the main economic games that have been developed to study one-shot unselfishness. Section III reviews social preference models, as earlier attempts to explain unselfishness in one-shot economic games within a unified theoretical framework. This section also describes a number of experiments that violate social preference models. Section IV shows how these experiments can be organised by general moral preferences for doing what one believes to be the right thing. Section V focuses on practical applications of the moral preference hypothesis. Section VI reviews the models of moral preferences that have been introduced so far and proposes a new model that explicitly takes into account the importance of personal norms.
Lastly, Section VII outlines a number of key questions for future work, while Section VIII summarises the main conclusions. Taken together, this review thus outlines a mathematical formalism for morality, which shall inform future models aimed at better understanding selfless actions as well as artificial intelligence that strives to emulate counterintuitive human decision-making.
II. MEASURES OF UNSELFISH BEHAVIOUR
There are various forms of unselfish behaviour. For example, giving money to a homeless person on the street is, in principle, quite different from collaborating with a colleague on a common project, or from telling the truth when one is tempted to lie. To take this source of heterogeneity into account, scholars have developed a series of simple games and decision problems that are meant to prototypically represent different types of unselfish behaviour. These are simple scenarios in which experimental subjects can make decisions that have real consequences. To incentivise these decisions, behavioural scientists usually use monetary payoffs (at least among adult subjects, whereas other forms of remuneration, such as stickers, might be more effective among children).

In this review, we will be mainly focused on one-shot decisions that are purely unselfish, meaning that they bring no monetary benefit to the decision maker (and possibly bring a cost), no matter the beliefs of the decision maker regarding the behaviour of other people involved in the interaction. Specifically, we measure altruistic behaviour using the dictator game (see Table 1 for all the definitions), cooperative behaviour in pairwise interactions using the prisoner's dilemma, truth-telling using the sender-receiver game, the trade-off between equality and efficiency using the trade-off game, trustworthiness using player 2 in the trust game, and altruistic punishment using player 2 in the ultimatum game. In the last section we will also briefly consider decisions that are strategically unselfish, such as trust (player 1 in the trust game) and strategic fairness (player 1 in the ultimatum game), which might actually maximise the payoff of the decision maker, depending on their beliefs about the behaviour of the second player.
The distinction between pure unselfishness and strategic unselfishness generalises the distinction between pure cooperation and strategic cooperation, introduced by Rand in his meta-analysis [32], where "pure cooperation" was defined as paying a cost to benefit another person, regardless of the behaviour of the other person, as opposed to "strategic cooperation", which might maximise the cooperator's payoff, depending on the other person's behaviour.
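The purely/strategically unselfish distinction can be made concrete with a minimal payoff sketch; the endowments and the multiplier below are illustrative parameters, not taken from any particular study.

```python
# A minimal sketch (illustrative parameters) of the distinction drawn above
# between purely and strategically unselfish decisions, using the dictator
# game and the trust game.

def dictator_payoff(give, endowment=10):
    """Dictator game: giving lowers the dictator's own payoff no matter what
    anyone else does, so giving is purely unselfish."""
    assert 0 <= give <= endowment
    return endowment - give

def trust_game_payoff_p1(sent, returned_fraction, endowment=10, multiplier=3):
    """Trust game, player 1: the amount sent is multiplied and player 2 returns
    some fraction of it, so sending is strategically unselfish."""
    assert 0 <= sent <= endowment
    return endowment - sent + multiplier * sent * returned_fraction

# Purely unselfish: any positive transfer is costly, whatever the recipient does.
assert dictator_payoff(5) < dictator_payoff(0)

# Strategically unselfish: trusting beats keeping only if player 2 returns enough.
assert trust_game_payoff_p1(10, 0.5) > trust_game_payoff_p1(0, 0.5)
assert trust_game_payoff_p1(10, 0.1) < trust_game_payoff_p1(0, 0.1)
```

Whether player 1's transfer maximises their payoff thus depends on beliefs about player 2's return rate, which is exactly what makes trust strategic rather than pure.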
III. SOCIAL PREFERENCES AND THEIR LIMITATIONS
Behavioural scientists have long recognised that some people do act unselfishly even in one-shot anonymous interactions. For example, the first comprehensive empirical work on the one-shot prisoner's dilemma dates back to 1965 [1]. Formal frameworks to explain one-shot unselfishness have a more recent history, starting in 1994, when Ledyard observed that cooperation, altruism, and altruistic punishment could be explained by assuming that people maximise a utility function that depends not only on their own monetary payoff, but also on the total monetary payoff of the other people that are involved in the interaction [5]. See Table 2 for the exact mathematical definition. Since then, several models have been introduced. In 1998, Levine [6] proposed a utility function in which the level of altruism depends on the level of altruism of the other players. Subsequently, in 1999, Fehr and Schmidt [7] proposed a framework according to which players care about minimising inequities. In 2000, Bolton and Ockenfels [8] followed a similar idea and introduced a general inequity aversion model, in which the utility of an action depends negatively on the distance between the amount of money the decision maker gets if that action is implemented and the amount of money the decision maker would get if the equal allocation were implemented. The authors proposed an explicit mathematical formula only for the case of n = 2 players. In 2002, Andreoni and Miller [9] estimated the behaviour of experimental subjects in a number of dictator game choices using a specific utility function taking into account altruistic tendencies as well as potential convexity in the preferences. In the same year, Charness and Rabin [10] introduced a general utility function which, depending on the relative relationship between its two parameters, can cover several cases, including competitive preferences, inequity aversion preferences, and social efficiency preferences. We refer to Table 2 for the exact functional forms.
(Besides these models, scholars have also studied models that can be applied to specific subsets of one-shot anonymous interactions (e.g., [4]). In this review, we focus on models that can be applied to any one-shot anonymous interaction involving unselfish behaviour.)

While differing in many details, all social preference models share one common property: they assume that the utility of a decision maker is a function of the monetary payoffs of the available actions. This assumption came under considerable criticism for the first time in 2003, when Falk, Fehr and Fischbacher [33] showed that rejection rates in the ultimatum game depend on the choice set available to the proposer. Specifically, the split (8,2), i.e., 8 to the proposer and 2 to the responder, is more likely to be accepted in ultimatum games in which the only other choice available to the proposer is (10,0), compared to ultimatum games in which the only other choice available to the proposer is (5,5). Therefore, responders prefer accepting (8,2) over rejecting it in the former case, but they prefer rejecting (8,2) over accepting it in the latter one, despite the fact that these choices have the same monetary consequences in the two cases. Clearly, this cannot be explained by any model of social preferences. See [21, 26] for conceptual replications.

Shortly after, in 2005, Uri Gneezy introduced the sender-receiver game [34]. In his experiments, decision makers were less likely to implement an allocation of money when implementing this allocation also required misreporting private information. This finding, too, cannot be explained by any model of social preferences and, more generally, by any utility function that depends only on the monetary payoffs that are associated with the available actions. This thus indicates that (some) people have an intrinsic cost of lying, which goes beyond their preferences toward monetary outcomes.
To further support this interpretation, several scholars have independently studied the sender-receiver game in contexts in which lying would benefit both the sender and the receiver to the same extent. This case is particularly important because, when the benefit for the sender is equal to the benefit for the receiver, all social preference models predict that everyone would lie. However, this prediction turned out to be violated in experiments, which showed that a significant proportion of people tell the truth [23, 35, 36].

Subsequently, social preference models came under critique also in one of the behavioural domains in which they had been most successful, namely in research involving the dictator game. Dana, Cain and Dawes [37] and Lazear, Malmendier and Weber [38] observed that some dictator game givers would prefer to altogether avoid the dictator game interaction if given the chance. These people thus preferred giving over keeping in a context in which they were forced to play the dictator game, but preferred keeping over giving in a context in which they could choose whether to play the dictator game or not. This finding, as in the preceding examples, cannot be explained by any utility function that is based solely on monetary outcomes.

For the same game, and along similar lines, List [39], Bardsley [40], and Cappelen et al. [41] found that extending the choice set of the dictator by adding the possibility to take money from the recipient makes some dictators less likely to give. Therefore, these dictators preferred giving over keeping when the taking option was not available, but preferred keeping over giving when the taking option was available. This finding likewise cannot be explained by any preference over monetary payoffs.
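The common logic of these violations can be illustrated with a small sketch: a utility defined only over the monetary outcomes of the chosen allocation cannot depend on the rest of the choice set, whereas a norm-dependent term can. The numeric values of the norm term below are hypothetical and serve only to show that such a term can rationalise the choice-set dependence reported by Falk, Fehr and Fischbacher.

```python
# Hypothetical sketch: why choice-set dependence rules out outcome-based
# utilities. A responder evaluates the offer (8,2) in the ultimatum game.

def outcome_utility(offer):
    """Any outcome-based utility takes only the monetary split as input, so it
    returns the same value whatever the proposer's forgone alternative was.
    Here, an inequity-aversion-style example."""
    proposer, responder = offer
    return responder - 0.5 * max(proposer - responder, 0)

def moral_utility(offer, alternative, mu=3.0):
    """Norm-dependent sketch: the personal-norm term P (hypothetical values)
    depends on what the proposer could have offered instead."""
    P = 1.0 if alternative == (10, 0) else -1.0  # (8,2) kind vs (10,0), unfair vs (5,5)
    return offer[1] + mu * P

# Outcome-based models assign (8,2) a single value in both games:
assert outcome_utility((8, 2)) == -1.0  # independent of the choice set

# The norm-dependent utility does not: accepting beats rejecting (utility 0)
# only when the proposer's alternative was (10,0).
assert moral_utility((8, 2), alternative=(10, 0)) > 0  # accept
assert moral_utility((8, 2), alternative=(5, 5)) < 0   # reject
```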
A conceptually similar point was also made by Krupka and Weber [22] and Capraro and Vanzo [29], who found that even minor changes in the instructions of the dictator game can notably impact people's behaviour.

In the years after 2013, the inability of purely monetary-based models to explain empirically observed behaviour spread to many other games and decision problems, whose experimental regularities had been previously thought to be explainable in terms of social preferences. Examples included the prisoner's dilemma [25, 27], the trust game [25], as well as different variants of the trade-off game [27, 28, 30], thus resulting in a crisis of the social preference hypothesis.

IV. THE RISE OF THE MORAL PREFERENCE HYPOTHESIS
To solve a crisis, one needs a paradigm shift. The shift started in 2010, when Bicchieri and Chavez [21] proposed an elegant solution for one of the aforementioned empirical observations. This solution builds on classic work suggesting that, in everyday life, people's behaviour is partly determined by what they believe to be the norms in a given context [11–20]. This observation led behavioural scientists to propose several classifications of norms. Particularly relevant for the thesis of this review is the distinction between personal and social norms [15] and, moreover, among the social norms, the distinction between injunctive and descriptive norms [17]. Personal norms refer to internal standards about what is right or wrong in a given situation; injunctive norms refer to what other people approve or disapprove of in that situation; descriptive norms refer to what other people actually do. In one-shot anonymous games, like the games considered in this review, the distinction among personal, descriptive, and injunctive norms roughly corresponds to Bicchieri's personal normative beliefs, empirical expectations, and normative expectations [18]. See Table 3 for precise definitions.

The groundbreaking intuition of Bicchieri and Chavez [21] was to apply the theory of norms to deviations from monetary-based social preferences in the ultimatum game. Specifically, Bicchieri and Chavez showed that the ultimatum game offer that responders consider to be fair depends on the choice set available to the proposer; moreover, responders tend to reject offers that they consider unfair. This suggests that altruistic punishment is driven by responders following their personal norms, beyond the monetary consequences that these actions bring about.
In particular, this explains the aforementioned results of Falk, Fehr, and Fischbacher [33], that responders reject the same offer at different rates depending on the other offers available to the proposer.

Shortly after, in 2013, Krupka and Weber [22] applied a similar approach to several variants of the dictator game. However, instead of focusing on personal norms, they focused on injunctive norms. For each of the available actions, subjects were asked to declare whether they found the corresponding action to be "very socially inappropriate", "somewhat socially inappropriate", "somewhat socially appropriate", or "very socially appropriate". Subjects were given a monetary prize if they matched the modal choice made by other participants. Observe that, in this way, Krupka and Weber incentivised the elicitation of the injunctive norm. (The elicitation of personal norms cannot be incentivised.) In doing so, Krupka and Weber found that people believe that others think that avoiding a dictator game interaction is far less socially inappropriate than keeping the whole amount of money in a dictator game that one is obliged to play. Therefore, the empirical results summarised above regarding dictator games with an exit option [37, 38] can be explained simply by a change in the perception of what is the injunctive norm in that context. Similarly, Krupka and Weber found that people believe that others think that keeping the money in a dictator game with a taking option is far less socially inappropriate than keeping the money in the dictator game without the taking option. In this way, they could also explain the results of List [39], Bardsley [40], and Cappelen et al. [41] in terms of a change in the perception of the injunctive norm.
Finally, Krupka and Weber presented a novel experiment in which subjects played the dictator game in either of two variants: in the Standard variant, dictators started with $10 and had to decide how much of it, if any, to give to the recipient; in the Bully variant, the money was initially split equally among the dictator and the recipient, and the dictator could either give, take, or do nothing. The authors found that people were more altruistic in the Bully variant compared to the Standard variant, and this was driven by the fact that people rated "taking from the recipient" far less socially appropriate than "not giving to the recipient".

The work of Krupka and Weber suggests that taking into account injunctive norms is important to explain deviations from social preference models in the dictator game. But are the injunctive norms really the main force behind the observed behavioural changes, or are there also other norms playing more primary roles? In the last five years, a set of empirical studies tried to address this question. Schram and Charness [24] analysed the behaviour of dictators who were given advice from third parties about the injunctive norm. They observed that dictators became more pro-social only when their choices were made public. By contrast, when their choices remained private, they found no significant increase in pro-sociality compared to the case in which they did not receive any information about the injunctive norm. These results indicate that, although injunctive norms might correlationally explain behavioural changes in anonymous (and thus private) dictator game experiments, they are unlikely to be the primary motivation. In fact, given that these games were played anonymously, in front of a computer screen, intuition suggests that the norms primarily at play are the personal norms.
Two recent works provide evidence for this hypothesis. Capraro and Vanzo [29] found that framing effects in the dictator game generated by morally loaded instructions can be explained by changes in the perception of what people "personally think to be the right thing" in the given context (i.e., their personal norms). Capraro et al. [31] showed that making personal norms salient prior to playing the dictator game (by asking subjects to state what they personally think to be the morally right thing to do) has a strong effect on subsequent dictator game donations, even persisting to a second-stage prisoner's dilemma interaction.

This set of works thus suggests that dictator game giving is driven by personal norms. Putting this together with the results of Bicchieri and Chavez, we obtain that both altruism and altruistic punishment can be explained by people following their personal norms.

More recently, this finding has been not only replicated, but, more importantly, also extended to explain several other forms of unselfish behaviour. In 2016, Kimbrough and Vostroknutov [25] introduced a task "that measures subjects' preferences for following rules and norms, in a context that has nothing to do with social interaction or distributional concerns". They found that this measure of norm-sensitivity predicts dictator game altruism, trust game trustworthiness (but not trust), and ultimatum game rejection thresholds (but not offers). Taken together, this indicates that altruism, trustworthiness, and altruistic punishment are driven by a common desire to adhere to a personal norm. In 2017, Eriksson et al. [26] conducted an ultimatum game experiment under two different conditions. The difference, however, was only in the labels that were used to describe the action of refusing the proposer's offer. In one treatment, this action was labeled "rejecting the proposer's offer", while in the other treatment, the same action was labeled "reducing the proposer's payoff".
Since these two options are monetarily equivalent, any utility function depending only on the monetary payoffs of the available actions predicts that responders should behave the same way in both cases. But contrary to this prediction, Eriksson et al. found that responders displayed higher rejection thresholds in the "rejection frame" than in the "reduction frame". Moreover, they showed that the observed framing effect could be explained by a change in what people think to be the right thing to do. Specifically, subjects tended to rate the action of reducing the proposer's offer to be morally worse than the action of rejecting the proposer's offer, in spite of the fact that these two actions had the same monetary consequences. In 2018, Capraro and Rand [27] showed that behaviour in the trade-off game is highly sensitive to the labels used to describe the available actions. In line with Eriksson et al. [26], Capraro and Rand also found that their framing effects could be explained by a change in what people think to be the right thing to do. Notably, framing effects in the trade-off game have been replicated several times [28, 30, 42–44], and a recent work has shown that these moral framings tap into relatively internalised moral preferences [44]. Moreover, Capraro and Rand also considered a situation in which the personal norm conflicted with the descriptive norm, and found that people tend to follow the personal norm, rather than the descriptive norm. The same research also revealed a correlation between the framing effect in the trade-off game and giving in the dictator game and cooperation in the prisoner's dilemma, thus indicating that not only are trade-off decisions driven by personal norms, but that altruism and cooperation are also subject to that same facilitator.
Cooperative behaviour is also typically correlated with altruistic behaviour [45–47], suggesting that they are driven by a common underlying motivation.

To the best of our knowledge, there are no works directly exploring the role of personal norms on truth-telling in the sender-receiver game. However, Biziou-van-Pol et al. [23] have shown that there is a positive correlation between truth-telling in the sender-receiver game (in the Pareto white lie condition), giving in the dictator game, and cooperation in the prisoner's dilemma, suggesting that these types of behaviours are driven by a common motivation. Since the aforementioned research suggests that altruism and cooperation are driven by personal norms, this correlation suggests that lying aversion is too.

In sum, research accumulated in the last ten years suggests that several forms of one-shot, anonymous unselfishness, including altruism, altruistic punishment, truth-telling, cooperation, trustworthiness, and the equality-efficiency trade-off, can be explained using a unified theoretical framework, whereby people have moral preferences for following their personal norms, beyond the monetary payoff that these actions bring about. Of course, this is not meant to imply that monetary payoffs do not play any role in explaining one-shot unselfishness, but simply that something else, in addition to monetary payoffs, should be taken into account. The thesis is that this 'something else' is personal norms, which gives rise to the moral preference hypothesis as described in Table 4. Also, this is not meant to imply that other types of norms play no role in these forms of one-shot selfless behaviour. For example, nudging the injunctive norm in the prisoner's dilemma [31] and in the trade-off game [48] has a similar effect as nudging the personal norm.
Moreover, it is possible that social norms ultimately drive personal norms, because following them enhances or preserves one's sense of self-worth and avoids self-concept distress, resulting in a self-reinforcing behaviour that eventually benefits one's own self-image [15]. However, the aforementioned literature suggests that, at a proximate level, personal norms have a greater explanatory power, in the sense that they consistently explain people's behaviour also in games where injunctive norms have been shown to play a limited role (e.g., the dictator game) or where descriptive norms play a limited role (e.g., the trade-off game).
V. PRACTICAL APPLICATIONS
Behavioural scientists and policy makers have been using norm-based interventions to foster pro-sociality in real life for decades [49–60]. Although these paternalistic interventions have been criticised because they subtly violate people's freedom of choice [61] and can be exploited by malicious institutions [62] (see [63] for a response to these critiques), they are well-studied because, compared to standard procedures to foster pro-sociality (punishment and rewards), they save the monitoring cost that the institution needs to pay in order to know whom to punish or reward. Norm-based interventions typically manipulate the descriptive or the injunctive norm in a given context, and show that this has an effect on people's behaviour in that same context. The more recent works reviewed in the previous section, showing that unselfish behaviour in one-shot, anonymous economic games is primarily driven by a desire to follow personal norms, suggest that a more effective mechanism to increase pro-sociality might be to use norm-based interventions that target personal norms, rather than social norms. The appeal of targeting personal norms, compared to other mechanisms to promote pro-sociality, is also that it is potentially cheaper. Clearly, it is cheaper than punishment and rewards because it avoids the monitoring cost. Additionally, it saves the cost of collecting information about the behaviour or the moral judgments of other people, which forms the basis of interventions targeting social norms.

In recent years, there has been a growing body of research exploring the effect of nudging personal norms on various forms of unselfish behaviour. Some works using economic games found that making personal norms salient increases donations in the dictator game [31, 64] and cooperation in the prisoner's dilemma [31, 65], and decreases in-group favouritism, at least on average [66].
This suggests that nudging personal norms might be effective at increasing pro-sociality in one-shot anonymous decisions that have consequences outside the laboratory. Along these lines, Capraro et al. [31] found that asking people to report what they personally think is the morally right thing to do increases crowdsourced charitable donations by 44%.
VI. MODELS OF MORAL PREFERENCES
We have thus seen that several forms of unselfish behaviour can be organised by moral preferences for following personal norms. The question is, can we model this using a formal utility function?

There have been some attempts to formalise people's tendency to follow a norm [22, 25, 67–75]. Most of these models, however, are either very specific, in the sense that they can be applied only to certain games, or do not distinguish among different types of norms. Three models can be applied to every game of interest in this review (and, more generally, to every one-shot game) and distinguish among different types of norms.

Levitt and List [68] introduced a model where the utility of an action a depends on the monetary payoff associated with that action, v_i(π_i(a)), as well as on the moral cost (or benefit), m(a), associated with that action:

u_i(a) = v_i(π_i(a)) + m(a).

Levitt and List assumed that the moral cost (or benefit) depends primarily on three factors: whether the action is recorded or performed in the presence of an observer, whether the action has negative consequences on other players, and whether the action is in line with "social norms or legal rules that govern behavior in a particular society". Therefore, Levitt and List's model, although useful in many circumstances, mentions only social norms, while ignoring the effect of personal norms.

A similar model was considered by Krupka and Weber [22], with the key difference that they focused on injunctive norms specifically. Krupka and Weber introduced a function N defined over the set of available actions that, given an action a, returns a number N(a) representing the extent to which society views a as socially appropriate. They also assumed that people are heterogeneous in the extent to which they care about doing what society considers to be appropriate. In doing so, they obtain the utility function:

u_i(a) = v_i(π_i(a)) + γ_i N(a).
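To illustrate how a utility of this form generates choices, the sketch below maximises it over the actions of a $10 dictator game; the appropriateness profile N(a) and the values of γ are hypothetical placeholders, not elicited data from the original study.

```python
# Sketch of choice under a Krupka-Weber-style utility
#   u_i(a) = v_i(pi_i(a)) + gamma_i * N(a)
# in a $10 dictator game. N and gamma are hypothetical placeholders.

actions = range(0, 11)  # possible amounts given to the recipient

def v(own_payoff):
    """Linear valuation of the dictator's own money."""
    return float(own_payoff)

def N(give):
    """Hypothetical appropriateness rating in [-1, 1]: giving more is rated
    more socially appropriate."""
    return -1.0 + 2.0 * give / 10.0

def best_action(gamma):
    """The dictator chooses the action maximising u(a) = v(10 - a) + gamma * N(a)."""
    return max(actions, key=lambda a: v(10 - a) + gamma * N(a))

assert best_action(0.0) == 0    # no norm concern: keep everything
assert best_action(10.0) == 10  # strong norm concern: give everything
```

With a linear v, this sketch predicts corner choices; heterogeneous giving then arises from heterogeneity in γ_i across individuals, or from curvature in v.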
As mentioned above, one of the main contributions of Krupka and Weber was to introduce an experimental technique to elicit the injunctive norm. To this end, they asked participants to rate each of the available actions in terms of its social appropriateness. Participants were incentivised to match the modal choice of the other participants.

Very recently, in 2020, Kimbrough and Vostroknutov presented a different approach, but still based on injunctive norms [75]. Specifically, they introduced the utility function

u_i(a) = v_i(π_i(a)) + φ_i η(a),

where φ_i represents the extent to which i cares about following the injunctive norm, and η(a) represents a measure of whether society thinks that a is socially appropriate. Although this utility function looks very similar to the one proposed by Krupka and Weber, it differs from it in one important dimension. While Krupka and Weber's social appropriateness, N(a), is computed by asking participants what they think others would approve or disapprove of (and therefore it need not depend only on the monetary consequences of the available actions), Kimbrough and Vostroknutov's injunctive norm, η, is built axiomatically from the game and is assumed to be inversely proportional to the overall dissatisfaction of the players, defined as the difference between what they get in a given scenario and what they could have gotten in others. One limitation of this approach is therefore that people always prefer Pareto dominant allocations over Pareto dominated ones. In experiments, however, this property is not always satisfied. For example, when lying is Pareto dominant, some people still tell the truth, and these people tend to cooperate in a subsequent prisoner's dilemma and to give in a subsequent dictator game [23].
Moreover, in trade-off games framed in such a way that the Pareto dominant allocation is presented as morally wrong, people tend to choose the Pareto dominated option [27, 28].

In sum, previous formal models consider only social norms or, more specifically, injunctive norms. But, as we have seen in the previous sections, unselfish behaviour in one-shot anonymous interactions is often driven by personal norms, rather than by social norms. Taking inspiration from the above models, one can formalise this using the utility function

u_i(a) = v_i(π_i(a)) + µ_i P_i(a),

where µ_i represents the extent to which player i cares about doing what s/he personally thinks to be the morally right thing to do, and P_i(a) represents the extent to which i personally thinks that a is morally right. This functional form might superficially seem similar to the ones discussed earlier, but it differs from them in two important points. One point is that the personal norm P_i(a) typically depends on the individual i, whereas the injunctive norm depends on the society and the culture in which the individual is embedded. The second point is the very fact that P_i represents the extent to which i thinks that a is the morally right thing to do, whereas m(a), N(a), and η(a) represent social norms. In general, the personal norm might not be aligned with the social norms. In practice, P_i(a) can be estimated using a suitable experiment, whereas µ_i and v_i can be estimated, on average, using statistical techniques, following a method similar to the one developed by Krupka and Weber for injunctive norms [22]. Specifically, one can estimate P_i(a) by asking subjects to self-report the extent to which they personally think that action a is the morally right thing to do. One can then use these ratings to predict behaviour, using a simple regression. The coefficient of this regression gives the average of the µ_i's.
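The estimation strategy just described can be sketched on simulated data. The ratings, the true value of µ, and the noise level below are all invented for illustration; a real study would use observed choices and self-reported norm ratings.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
ratings = rng.uniform(0, 1, n)                                # self-reported P_i(a)
true_mu = 2.5                                                 # assumed average norm weight
behaviour = 0.3 + true_mu * ratings + rng.normal(0, 0.2, n)   # simulated choices

# OLS: behaviour ~ intercept + personal-norm rating; the slope estimates mu.
X = np.column_stack([np.ones(n), ratings])
coef, *_ = np.linalg.lstsq(X, behaviour, rcond=None)
print(round(coef[1], 2))   # recovers a value close to the assumed mu of 2.5
```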
Also, by putting the monetary payoffs in the regression, one can obtain an estimate of the average of the v_i's.

This utility function based on personal norms has greater predictive power than its counterparts based only on social norms, in the sense that it explains behaviour in a larger set of games. We have seen earlier that Schram and Charness [24] found that making the injunctive norm salient does not increase altruistic behaviour in the anonymous dictator game. D'Adda et al. [53] found that making the descriptive norm salient has only a marginally significant effect on anonymous dictator game giving; this effect also vanishes in a second interaction, played immediately after. Along the same lines, Dimant, van Kleef and Shalvi [76] found that promoting the injunctive norm and promoting the descriptive norm have no effect on people's honesty in a deception game in which subjects can lie for their benefit. On the other hand, numerous works have shown that nudging personal norms impacts several forms of unselfish behaviour, including altruism [31, 64], altruistic punishment [26], cooperation [31, 65], and the equality-efficiency trade-off [27]. Moreover, the effect typically persists for at least another interaction and even spills across contexts [31]. All these results are consistent with a utility function based on personal norms and are not consistent with a utility function based only on social norms. We present a summary of all the moral preference models discussed above in Table 5.

VII. FUTURE WORK
This is an exciting field of research, which provides a unified view of human choices in several contexts of decision-making while also having significant practical implications. Nonetheless, several questions need to be explored in future research, as detailed in what follows and summarised in Table 6.
A. The utility function
From a mechanistic perspective, the moral preference hypothesis raises the question of how we can express the utility function of a decision maker. Scholars have tried to give mathematical sense to people's morality since the foundation of mathematical economics [77, 78]. About two centuries later, the question is still open, even in the simple setting of one-shot anonymous interactions. One simple approach is to assume that people are torn between maximising their monetary payoff and doing what they personally think to be the morally right thing. This can be done with a utility function of the form u_i(a) = v_i(π_i(a)) + µ_i P_i(a). Although this utility function outperforms its counterparts based on social norms, as well as social preferences, it undoubtedly represents just a first candidate. Future work should explore other ways to formalise moral preferences, through finer experiments with the power to detect small variations in how people weigh their personal norm against monetary incentives. Future work should also find ways to estimate what people think to be the right thing in a given context, without asking participants in a separate experiment. The literature reviewed above shows that, in many cases, it is enough to change only one word in the instructions of a decision problem to change people's perception of what is the right thing to do in a given context. This suggests that P_i(a) partly depends on the language in which the action a is presented. Exploring this dependence could greatly improve the predictive power of the utility function. How can one do so? Recent work shows that emotional content in messages increases their diffusion on social media [79–81]. Translated to the context of one-shot games, this suggests that the emotions carried by the instructions of the decision problem might contribute to the computation of P_i. Along these lines, it is possible that one can use sentiment analysis to better estimate P_i.
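As a toy illustration of this idea, even a hand-made lexicon-based polarity score (the lexicon below is purely illustrative, a stand-in for a real sentiment model) captures how rewording an action might shift its estimated P_i:

```python
# Hand-made toy lexicon: an illustrative stand-in for a trained sentiment model.
LEXICON = {"steal": -1.0, "take": -0.5, "donate": 1.0, "give": 0.5, "help": 1.0}

def polarity(text):
    """Mean lexicon score of the words in the text; 0.0 if no word is scored."""
    hits = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

# The same monetary transfer, described with two different moral framings:
print(polarity("give money to the other player"))    # -> 0.5
print(polarity("take money from the other player"))  # -> -0.5
```

In a real application the polarity would come from a trained classifier and could enter the utility function as an additional regressor alongside the norm ratings.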
Sentiment analysis is a technique developed by computational linguists that allows one to assign a polarity to any given piece of text [82]. In principle, this polarity could enter the utility function of a decision maker and act as an additional motivation or obstacle for choosing an action, beyond its monetary consequences. In any case, mathematically describing, or at least quantifying, the seemingly intangible moral preferences, and in doing so building bridges between computational linguistics, behavioural economics, and moral psychology, is a fascinating direction for future work.

B. Evolution of norms
Where do personal norms come from? One explanation is that they come from the internalisation of behaviours that, although not individually optimal in the short term, are optimal in the long run. It is therefore important to understand which unselfish behaviours can be selected in the long term, and under which conditions. A promising line of research uses evolutionary game theory and statistical physics to find the conditions that promote the evolution of cooperation on networks [83]. More recently, scholars have started applying similar techniques to study the evolution of other forms of unselfish behaviour [84], such as truth-telling in the sender-receiver game [85, 86] and trustworthiness in the trust game [87]. Some works along this line have also looked at the evolution of choices in the ultimatum game [88–91]. Future work should extend the same techniques to other forms of unselfish behaviour.
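As a minimal example of the evolutionary approach, the sketch below runs discrete-time replicator dynamics for a one-shot prisoner's dilemma in a well-mixed population. The payoff values and step size are standard illustrative choices, not taken from any specific study; the point is that without additional mechanisms, such as network structure, cooperation goes extinct.

```python
# Discrete-time replicator dynamics for a one-shot prisoner's dilemma in a
# well-mixed population; illustrative payoffs R > P and T > R, S < P.
R, S, T, P = 3, 0, 5, 1          # reward, sucker, temptation, punishment
x = 0.5                          # initial fraction of cooperators

for _ in range(1000):
    fc = x * R + (1 - x) * S     # expected payoff of a cooperator
    fd = x * T + (1 - x) * P     # expected payoff of a defector
    avg = x * fc + (1 - x) * fd  # population-average payoff
    x += 0.01 * x * (fc - avg)   # replicator update: cooperators shrink when fc < avg

print(round(x, 4))               # cooperation vanishes without extra mechanisms
```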
C. Personal norms versus social norms
The experimental literature reviewed in the previous sections suggests that several forms of one-shot, anonymous unselfishness can be unified under a framework according to which people have preferences for following their personal norms. Moreover, preliminary evidence suggests that nudging personal norms can be an effective tool for fostering pro-sociality: making personal norms salient affects altruism, cooperation, altruistic punishment, and trade-off decisions between equality and efficiency [26, 31, 64, 65].

This, of course, does not mean that social norms play no role at all. For example, nudging injunctive norms has a significant effect on the one-shot, anonymous prisoner's dilemma [31] and the trade-off game [48]. One question that is still open, however, is whether these effects are fundamentally distinct from the effect of nudging personal norms. It is indeed possible that nudging injunctive norms in these games also nudges personal norms, and this is what makes people change their behaviour. A working paper suggests that people who follow injunctive norms in the trade-off game are different from those who follow personal norms [48]. Therefore, it is possible that a larger model taking into account both personal and injunctive norms might have even greater predictive power, at least in some contexts, than a model based exclusively on personal norms. Further experiments comparing the effects of nudging different norms are needed to clarify this point. The evidence in this case is indeed still sparse. One study compared the relative effect of the descriptive and the injunctive norm in the dictator game, and found that people tend to follow the descriptive norm [49]. Another study compared the relative effect of nudging personal norms and descriptive norms in the trade-off game, and found that people tend to follow the personal norms [27].
The aforementioned working paper compared the effect of nudging the personal and the injunctive norm in the trade-off game and found that they have a similar effect; moreover, when the two norms are in conflict, some people follow the personal norm and others follow the injunctive norm [48]. This suggests that people's behaviour depends on their focus of attention within an interconnected matrix of norms. Therefore, future work should explore norm salience, also in cases where more than one type of norm is simultaneously made salient.

Research should also go beyond anonymous decisions and investigate what happens when choices are observable. Intuition suggests that when choices are observable, social norms may play a bigger role than when they remain private; in line with this intuition, Schram and Charness [24] showed that nudging the injunctive norm impacts public but not private dictator game giving. However, no studies have compared the relative effectiveness of targeting different norms in public decisions.
D. Boundary conditions of interventions based on personal norms
With potential practical applications in mind, another important question concerns the boundary conditions of interventions based on personal norms. From a temporal perspective, previous research suggests that the effects of interventions targeting personal norms can last for several interactions within the same experiment [31, 65]. However, it seems unrealistic to expect that their effect will last indefinitely. For example, a recent field experiment targeting injunctive norms found an effect that diminishes after repeated interventions, although it can be restored by waiting a sufficient amount of time between interventions [92]. From the point of view of the decisional context, there will certainly be behavioural domains in which targeting personal norms is not as effective. For example, recent work suggests that risky cooperation in the stag-hunt game is primarily driven by preferences for efficiency, rather than by preferences for following personal norms [42].
E. External validity of interventions based on personal norms
Given the potential relevance of this line of work for society at large, future studies should explore the external validity of interventions based on personal norms. At the time of this writing, only one study has investigated the effect of nudging personal norms in contexts in which decisions have consequences outside the laboratory. This study found that nudging personal norms increases crowdsourced charitable donations to real humanitarian organisations by 44% [31].
F. The moral phenotype and its topology
We have seen that different forms of unselfish behaviour can be explained by a general tendency to do the right thing. We are tempted to call this tendency the "moral phenotype", extending the notion of "cooperative phenotype" introduced by Peysakhovich, Nowak, and Rand [46]; see also [47]. In their work, Peysakhovich and colleagues observed that pro-social behaviours in the dictator game, the public goods game (a variant of the prisoner's dilemma with more than two players), and the trust game (both players) were all correlated, and they termed this general pro-social tendency the cooperative phenotype. The cooperative phenotype is therefore uni-dimensional. The moral phenotype, on the other hand, is likely to be multi-dimensional. For example, we have seen earlier that both altruistic punishment and altruistic giving are driven by preferences for doing the right thing. However, Peysakhovich, Nowak, and Rand [46] found that they are not correlated. It is possible that they are not correlated because they come from different personal norms. The multi-dimensionality of morality is not a new idea, and several authors have come to suggest it in recent decades from different routes. For example, Haidt and colleagues argue that differences in people's moral concerns can be explained by individual differences across six "foundations" [93–95]. Kahane, Everett and colleagues have shown that (act) utilitarianism decomposes into at least two dimensions [96, 97]. Curry, Mullins, and Whitehouse [98] have reported that seven moral rules are universal across societies, but that societies vary in how they rank them. However, we are not aware of any work exploring how different personal norms link to different forms of one-shot unselfishness.

Another topological property of the moral phenotype that deserves further scrutiny is its boundary.
Does, for example, the moral phenotype include decisions that are strategically unselfish, such as strategic fairness (ultimatum game offers) and trust (trust game transfers), both of which maximise the decision maker's payoff depending on the decision maker's beliefs about the behaviour of the other player? Previous evidence is limited and mixed. Bicchieri and Chavez [21] showed that ultimatum game offers are partly driven by normative beliefs; Peysakhovich, Nowak, and Rand [46] found that trustees' decisions correlate with dictator game and public goods game decisions. By contrast, Kimbrough and Vostroknutov [25] found that trustees' and proposers' decisions are not correlated with their measure of norm-sensitivity.
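The correlational analysis used to delineate such a phenotype can be sketched on synthetic data. The latent trait and the three simulated decisions below are invented for illustration; real studies use observed game behaviour.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
trait = rng.normal(0, 1, n)                   # latent "moral phenotype"
dictator = trait + rng.normal(0, 0.5, n)      # giving, driven by the trait
public_goods = trait + rng.normal(0, 0.5, n)  # contribution, driven by the trait
punishment = rng.normal(0, 1, n)              # simulated as independent of the trait

print(round(np.corrcoef(dictator, public_goods)[0, 1], 2))  # strongly correlated
print(round(np.corrcoef(dictator, punishment)[0, 1], 2))    # near zero
```

Behaviours inside the phenotype correlate through the shared trait; a behaviour outside it does not, which is the empirical signature used to draw the boundary.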
G. A dual-process approach to personal norms
Do personal norms come out automatically, or do they require deliberation? Research has recently explored the cognitive basis of unselfish behaviour using cognitive process manipulations, such as time pressure and cognitive load, that favour instinctive responses [99–108]. It has been shown that promoting intuition favours cooperation [32] and altruistic punishment [109]. The evidence regarding altruism is more mixed [110, 111]. A meta-analysis suggests that intuition decreases truth-telling when lying harms abstract others, while leaving it unaffected when it harms concrete others [112]. Furthermore, results are inconclusive in the context of trustworthiness and the equality-efficiency trade-off (see [113] for a review). This line of work suggests that whether personal norms come out automatically or require deliberation may not have a general answer, but might depend on the specific behavioural context, and possibly also on the individual characteristics of the decision maker. More work is needed to understand which personal norms, in which contexts, and for which people, become internalised as automatic reactions.
VIII. CONCLUSIONS
The moral preference hypothesis is emerging as a unified framework to explain a wide range of one-shot, anonymous unselfish behaviours, including cooperation, altruism, altruistic punishment, truth-telling, trustworthiness, and the equality-efficiency trade-off. This framework has promising practical implications, given that interventions making personal norms salient have been shown to be effective at increasing charitable donations. Future work should explore further mathematical formalisations of moral preferences in terms of a utility function, investigate the evolution and internalisation of personal norms, study the external validity and the boundary conditions of policy interventions based on personal norms, compare the relative effectiveness of targeting different types of norms, examine the topology of the moral phenotype, and analyse the cognitive foundations of morality, possibly using a dual-process perspective.

Overall, the goal of this line of research should be to build bridges between different scientific disciplines to arrive at a better, perhaps more mechanistic, explanation of human decision-making. The outlined mathematical formalism for morality should be used to inform future models aimed at better understanding selfless actions, and it should also be used in artificial intelligence to better navigate the complex landscape of human morality and to better emulate human decision-making. Ultimately, the goal is to use the obtained insights to develop more efficient policies and interventions to increase good virtues and decrease bad ones, and to collectively strive towards better human societies.

The past century has seen a strict compartmentalisation of different scientific disciplines, leading to groundbreaking and important discoveries that might have been impossible without it.
But while technology and industry might fare well on idiosyncratic breakthroughs, human societies do not. The grandest challenges of today remind us that sustainable social welfare and organisation require a wholesome interdisciplinary and cross-disciplinary approach, and we hope this review will be an inspiration towards this goal.
ACKNOWLEDGMENTS
This work was supported by the Slovenian Research Agency (Grant Nos. P1-0403, J1-2457,J4-9302, and J1-9112). [1] Rapoport, A., Chammah, A. M., and Orwant, C. J.
Prisoner’s dilemma: A study in conflict andcooperation , University of Michigan Press, (1965).[2] Engel, C. Dictator games: A meta study.
Experimental Economics , 583–610 (2011).[3] Cialdini, R. B., Darby, B. L., and Vincent, J. E. Transgression and altruism: A case for hedonism. Journal of Experimental Social Psychology (6), 502–516 (1973).
4] Andreoni, J. Impure altruism and donations to public goods: A theory of warm-glow giving.
TheeEonomic Journal (401), 464–477 (1990).[5] Ledyard, J. O. Public goods: A survey of experimental research. In Handbook of ExperimentalEconomics, J., K. and A., R., editors. Princeton Univ. Press, Princeton, NJ (1995).[6] Levine, D. K. Modeling altruism and spitefulness in experiments.
Review of Economic Dynamics (3), 593–622 (1998).[7] Fehr, E. and Schmidt, K. M. A theory of fairness, competition, and cooperation. The QuarterlyJournal of Economics (3), 817–868 (1999).[8] Bolton, G. E. and Ockenfels, A. Erc: A theory of equity, reciprocity, and competition.
The AmericanEconomic Review (1), 166–193 (2000).[9] Andreoni, J. and Miller, J. Giving according to garp: An experimental test of the consistency ofpreferences for altruism. Econometrica (2), 737–753 (2002).[10] Charness, G. and Rabin, M. Understanding social preferences with simple tests. The QuarterlyJournal of Economics (3), 817–869 (2002).[11] Smith, A.
The theory of moral sentiments . Penguin, (2010).[12] Durkheim, ´E.
Les r`egles de la m´ethode sociologique . Flammarion, (2017).[13] Parsons, T.
The Structure of social action: A Study in Social Theory with Special Reference to aGroup of Recent European Writers . Free Press, New York: London, (1937).[14] Geertz, C.
The interpretation of cultures , volume 5019. Basic books, (1973).[15] Schwartz, S. H. Normative influences on altruism. In
Advances in experimental social psychology ,volume 10, 221–279. Elsevier (1977).[16] Elster, J. Social norms and economic theory.
Journal of Economic Perspectives (4), 99–117 (1989).[17] Cialdini, R. B., Reno, R. R., and Kallgren, C. A. A focus theory of normative conduct: recycling theconcept of norms to reduce littering in public places. Journal of personality and social psychology (6), 1015–1026 (1990).[18] Bicchieri, C. The grammar of society: The nature and dynamics of social norms . Cambridge Uni-versity Press, (2005).[19] Bicchieri, C.
Norms in the wild: How to diagnose, measure, and change social norms . OxfordUniversity Press, (2016).[20] Hawkins, R. X., Goodman, N. D., and Goldstone, R. L. The emergence of social norms and conven-tions.
Trends in cognitive sciences (2), 158–169 (2019).
21] Bicchieri, C. and Chavez, A. Behaving as expected: Public information and fairness norms.
Journalof Behavioral Decision Making (2), 161–178 (2010).[22] Krupka, E. L. and Weber, R. A. Identifying social norms using coordination games: Why doesdictator game sharing vary? Journal of the European Economic Association (3), 495–524 (2013).[23] Biziou-van Pol, L., Haenen, J., Novaro, A., Occhipinti Liberman, A., and Capraro, V. Does tellingwhite lies signal pro-social preferences? Judgment and Decision Making , 538–548 (2015).[24] Schram, A. and Charness, G. Inducing social norms in laboratory allocation choices. ManagementScience (7), 1531–1546 (2015).[25] Kimbrough, E. O. and Vostroknutov, A. Norms make preferences social. Journal of the EuropeanEconomic Association (3), 608–638 (2016).[26] Eriksson, K., Strimling, P., Andersson, P. A., and Lindholm, T. Costly punishment in the ultimatumgame evokes moral concern, in particular when framed as payoff reduction. Journal of ExperimentalSocial Psychology , 59–64 (2017).[27] Capraro, V. and Rand, D. G. Do the right thing: Experimental evidence that preferences for moralbehavior, rather than equity or efficiency per se, drive human prosociality. Judgment and DecisionMaking , 99–111 (2018).[28] Tappin, B. M. and Capraro, V. Doing good vs. avoiding bad in prosocial choice: A refined testand extension of the morality preference hypothesis. Journal of Experimental Social Psychology ,64–70 (2018).[29] Capraro, V. and Vanzo, A. The power of moral words: Loaded language generates framing effects inthe extreme dictator game. Judgment and Decision Making , 309–317 (2019).[30] Huang, L., Lei, W., Xu, F., Yu, L., and Shi, F. Choosing an equitable or efficient option: A distributiondilemma. Social Behavior and Personality: An international journal (10), 1–10 (2019).[31] Capraro, V., Jagfeld, G., Klein, R., Mul, M., and van de Pol, I. Increasing altruistic and cooperativebehaviour with simple moral nudges. Scientific Reports (1), 1–11 (2019).[32] Rand, D. G. 
Cooperation, fast and slow: Meta-analytic evidence for a theory of social heuristics andself-interested deliberation. Psychological Science (9), 1192–1206 (2016).[33] Falk, A., Fehr, E., and Fischbacher, U. On the nature of fair behavior. Economic inquiry (1), 20–26(2003).[34] Gneezy, U. Deception: The role of consequences. American Economic Review (1), 384–394(2005).
35] Cappelen, A. W., Sørensen, E. Ø., and Tungodden, B. When do we lie?
Journal of EconomicBehavior & Organization , 258–265 (2013).[36] Erat, S. and Gneezy, U. White lies. Management Science (4), 723–733 (2012).[37] Dana, J., Cain, D. M., and Dawes, R. M. What you don’t know won’t hurt me: Costly (but quiet) exitin dictator games. Organizational Behavior and human decision Processes (2), 193–201 (2006).[38] Lazear, E. P., Malmendier, U., and Weber, R. A. Sorting in experiments with application to socialpreferences.
American Economic Journal: Applied Economics (1), 136–63 (2012).[39] List, J. A. On the interpretation of giving in dictator games. Journal of Political economy (3),482–493 (2007).[40] Bardsley, N. Dictator game giving: altruism or artefact?
Experimental Economics (2), 122–133(2008).[41] Cappelen, A. W., Nielsen, U. H., Sørensen, E. Ø., Tungodden, B., and Tyran, J.-R. Give and take indictator games. Economics Letters (2), 280–283 (2013).[42] Capraro, V., Rodriguez-Lara, I., and Ruiz-Martos, M. J. Preferences for efficiency, rather than pref-erences for morality, drive cooperation in the one-shot stag-hunt game.
Journal of Behavioral andExperimental Economics (2020).[43] Capraro, V. Gender differences in the trade-off between objective equality and efficiency.
Judgmentand Decision Making (4), 534–544 (2020).[44] Capraro, V., Jordan, J. J., and Tappin, B. M. Does observability amplify sensitivity to moral frames?evaluating a reputation-based account of moral preferences. Journal of Experimental Social Psychol-ogy (2021).[45] Capraro, V., Jordan, J. J., and Rand, D. G. Heuristics guide the implementation of social preferencesin one-shot prisoner’s dilemma experiments.
Scientific Reports , 6790 (2014).[46] Peysakhovich, A., Nowak, M. A., and Rand, D. G. Humans display a ‘cooperative phenotype’that isdomain general and temporally stable. Nature communications (1), 1–8 (2014).[47] Reigstad, A. G., Strømland, E. A., and Tingh¨og, G. Extending the cooperative phenotype: Assessingthe stability of cooperation across countries. Frontiers in psychology , 1990 (2017).[48] Human, S. J. and Capraro, V. The effect of nudging personal and injunctive norms on the trade-offbetween objective equality and efficiency. Available at https://psyarxiv.com/mx27g/ (2020).[49] Bicchieri, C. and Xiao, E. Do the right thing: but only if others do so.
Journal of Behavioral DecisionMaking (2), 191–208 (2009).
50] Krupka, E. and Weber, R. A. The focusing and informational effects of norms on pro-social behavior.
Journal of Economic Psychology (3), 307–320 (2009).[51] Zafar, B. An experimental investigation of why individuals conform. European Economic Review (6), 774–798 (2011).[52] Raihani, N. J. and McAuliffe, K. Dictator game giving: The importance of descriptive versus injunc-tive norms. PloS ONE (12), e113826 (2014).[53] d’Adda, G., Capraro, V., and Tavoni, M. Push, don’t nudge: Behavioral spillovers and policy instru-ments. Economics Letters , 92–95 (2017).[54] Frey, B. S. and Meier, S. Social comparisons and pro-social behavior: Testing” conditional coopera-tion” in a field experiment.
American Economic Review (5), 1717–1722 (2004).[55] Croson, R. T., Handy, F., and Shang, J. Gendered giving: The influence of social norms on thedonation behavior of men and women. International Journal of Nonprofit and Voluntary SectorMarketing (2), 199–213 (2010).[56] Cialdini, R. B., Kallgren, C. A., and Reno, R. R. A focus theory of normative conduct: A theoreticalrefinement and reevaluation of the role of norms in human behavior. In Advances in experimentalsocial psychology , volume 24, 201–234. Elsevier (1991).[57] Ferraro, P. J. and Price, M. K. Using nonpecuniary strategies to influence behavior: evidence from alarge-scale field experiment.
Review of Economics and Statistics (1), 64–73 (2013).[58] Agerstr¨om, J., Carlsson, R., Nicklasson, L., and Guntell, L. Using descriptive social norms to in-crease charitable giving: The power of local norms. Journal of Economic Psychology , 147–153(2016).[59] Goldstein, N. J., Cialdini, R. B., and Griskevicius, V. A room with a viewpoint: Using social normsto motivate environmental conservation in hotels. Journal of Consumer Research (3), 472–482(2008).[60] Hallsworth, M., List, J. A., Metcalfe, R. D., and Vlaev, I. The behavioralist as tax collector: Us-ing natural field experiments to enhance tax compliance. Journal of Public Economics , 14–31(2017).[61] Hausman, D. M. and Welch, B. Debate: To nudge or not to nudge.
Journal of Political Philosophy (1), 123–136 (2010).[62] Glaeser, E. L. Paternalism and psychology. Technical report, National Bureau of Economic Research,(2005).
63] Sunstein, C. R.
Why nudge?: The politics of libertarian paternalism . Yale University Press, (2014).[64] Bra˜nas-Garza, P. Promoting helping behavior with framing in dictator games.
Journal of EconomicPsychology (4), 477–486 (2007).[65] Dal B ´o, E. and Dal B ´o, P. “Do the right thing:” the effects of moral suasion on cooperation. Journalof Public Economics , 28–38 (2014).[66] Bilancini, E., Boncinelli, L., Capraro, V., Celadin, T., and Di Paolo, R. “Do the right thing” forwhom? an experiment on ingroup favouritism, group assortativity and moral suasion.
Judgment andDecision Making , 182–192 (2020).[67] B´enabou, R. and Tirole, J. Incentives and prosocial behavior. American Economic Review (5),1652–1678 (2006).[68] Levitt, S. D. and List, J. A. What do laboratory experiments measuring social preferences revealabout the real world? Journal of Economic Perspectives (2), 153–174 (2007).[69] L ´opez-P´erez, R. Aversion to norm-breaking: A model. Games and Economic Behavior (1), 237–267 (2008).[70] Andreoni, J. and Bernheim, B. D. Social image and the 50–50 norm: A theoretical and experimentalanalysis of audience effects. Econometrica (5), 1607–1636 (2009).[71] Della Vigna, S., List, J. A., and Malmendier, U. Testing for altruism and social pressure in charitablegiving. Quarterly Journal of Economics (1), 1–56 (2012).[72] Kessler, J. B. and Leider, S. Norms and contracting.
Management Science (1), 62–77 (2012).[73] Alger, I. and Weibull, J. W. Homo moralis—preference evolution under incomplete information andassortative matching. Econometrica (6), 2269–2302 (2013).[74] Kimbrough, E. and Vostroknutov, A. Injunctive norms and moral rules. Technical report, mimeo,Chapman University and Maastricht University, (2020).[75] Kimbrough, E. O. and Vostroknutov, A. A theory of injunctive norms. Technical report, mimeo,Chapman University and Maastricht University, (2020).[76] Dimant, E., van Kleef, G. A., and Shalvi, S. Requiem for a nudge: Framing effects in nudginghonesty. Journal of Economic Behavior and Organization , 247–266 (2020).[77] Jevons, W. S.
The theory of political economy . Macmillan, (1879).[78] Bentham, J.
The collected works of Jeremy Bentham: An introduction to the principles of morals andlegislation . Clarendon Press, (1996).[79] Brady, W. J., Wills, J. A., Jost, J. T., Tucker, J. A., and Van Bavel, J. J. Emotion shapes the diffusion f moralized content in social networks. Proceedings of the National Academy of Sciences (28),7313–7318 (2017).[80] Brady, W. J., Wills, J. A., Burkart, D., Jost, J. T., and Van Bavel, J. J. An ideological asymmetry inthe diffusion of moralized content on social media among political leaders.
Journal of Experimental Psychology: General (10), 1802 (2019).
[81] Brady, W. J., Crockett, M., and Van Bavel, J. J. The MAD model of moral contagion: The role of motivation, attention and design in the spread of moralized content online.
Perspectives on Psychological Science (4), 978–1010 (2020).
[82] Pang, B., Lee, L., and Vaithyanathan, S. Thumbs up?: Sentiment classification using machine learning techniques. In Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing – Volume 10, 79–86. Association for Computational Linguistics, (2002).
[83] Perc, M., Jordan, J. J., Rand, D. G., Wang, Z., Boccaletti, S., and Szolnoki, A. Statistical physics of human cooperation.
Phys. Rep., 1–51 (2017).
[84] Capraro, V. and Perc, M. Grand challenges in social physics: In pursuit of moral behavior.
Front. Phys., 107 (2018).
[85] Capraro, V., Perc, M., and Vilone, D. The evolution of lying in well-mixed populations. Journal of the Royal Society Interface (156), 20190211 (2019).
[86] Capraro, V., Perc, M., and Vilone, D. Lying on networks: The role of structure and topology in promoting honesty. Phys. Rev. E, 032305 (2020).
[87] Kumar, A., Capraro, V., and Perc, M. The evolution of trust and trustworthiness.
Journal of the Royal Society Interface (169), 20200491 (2020).
[88] Page, K. M., Nowak, M. A., and Sigmund, K. The spatial ultimatum game. Proc. R. Soc. Lond. B, 2177–2182 (2000).
[89] Killingback, T. and Studer, E. Spatial ultimatum games, collaborations and the evolution of fairness.
Proc. R. Soc. Lond. B, 1797–1801 (2001).
[90] Iranzo, J., Floría, L., Moreno, Y., and Sánchez, A. Empathy emerges spontaneously in the ultimatum game: Small groups and networks.
PLoS ONE, e43781 (2011).
[91] Szolnoki, A., Perc, M., and Szabó, G. Defense mechanisms of empathetic players in the spatial ultimatum game. Phys. Rev. Lett., 078701 (2012).
[92] Ito, K., Ida, T., and Tanaka, M. Moral suasion and economic incentives: Field experimental evidence from energy demand.
American Economic Journal: Economic Policy (1), 240–267 (2018).
[93] Haidt, J. and Joseph, C. Intuitive ethics: How innately prepared intuitions generate culturally variable virtues.
Daedalus (4), 55–66 (2004).
[94] Graham, J., Haidt, J., and Nosek, B. A. Liberals and conservatives rely on different sets of moral foundations.
Journal of Personality and Social Psychology (5), 1029–1046 (2009).
[95] Haidt, J. The righteous mind: Why good people are divided by politics and religion. Vintage, (2012).
[96] Kahane, G., Everett, J. A., Earp, B. D., Caviola, L., Faber, N. S., Crockett, M. J., and Savulescu, J. Beyond sacrificial harm: A two-dimensional model of utilitarian psychology.
Psychological Review (2), 131–164 (2018).
[97] Everett, J. A. and Kahane, G. Switching tracks? Towards a multidimensional model of utilitarian psychology.
Trends in Cognitive Sciences (2), 124–134 (2020).
[98] Curry, O. S., Mullins, D. A., and Whitehouse, H. Is it good to cooperate? Current Anthropology (1), 47–69 (2019).
[99] Rand, D., Greene, J., and Nowak, M. Spontaneous giving and calculated greed. Nature, 427–430 (2012).
[100] Andersen, S., Gneezy, U., Kajackaite, A., and Marx, J. Allowing for reflection time does not change behavior in dictator and cheating games.
Journal of Economic Behavior & Organization, 24–33 (2018).
[101] Bereby-Meyer, Y., Hayakawa, S., Shalvi, S., Corey, J. D., Costa, A., and Keysar, B. Honesty speaks a second language.
Topics in Cognitive Science, 1–12 (2018).
[102] Bouwmeester, S., Verkoeijen, P. P., Aczel, B., Barbosa, F., Bègue, L., Brañas-Garza, P., Chmura, T. G., Cornelissen, G., Døssing, F. S., Espín, A. M., et al. Registered replication report: Rand, Greene, and Nowak (2012).
Perspectives on Psychological Science (3), 527–542 (2017).
[103] Capraro, V., Schulz, J., and Rand, D. G. Time pressure and honesty in a deception game. Journal of Behavioral and Experimental Economics, 93–99 (2019).
[104] Capraro, V., Corgnet, B., Espín, A. M., and Hernán-González, R. Deliberation favours social efficiency by making people disregard their relative shares: Evidence from USA and India. Royal Society Open Science (2), 160605 (2017).
[105] Chen, F. and Fischbacher, U. Cognitive processes underlying distributional preferences: A response time study. Experimental Economics, 1–26 (2019).
[106] Chuan, A., Kessler, J. B., and Milkman, K. L. Field study of charitable giving reveals that reciprocity decays over time.
Proceedings of the National Academy of Sciences (8), 1766–1771 (2018).
[107] Journal of Experimental Social Psychology, 76–81 (2017).
[108] Holbein, J. B., Schafer, J. P., and Dickinson, D. L. Insufficient sleep reduces voting and other prosocial behaviours. Nature Human Behaviour (5), 492 (2019).
[109] Hallsson, B. G., Siebner, H. R., and Hulme, O. J. Fairness, fast and slow: A review of dual process models of fairness. Neuroscience & Biobehavioral Reviews, 49–60 (2018).
[110] Rand, D. G., Brescoll, V. L., Everett, J. A., Capraro, V., and Barcelo, H. Social heuristics and social roles: Intuition favors altruism for women but not for men. Journal of Experimental Psychology: General (4), 389 (2016).
[111] Fromell, H., Nosenzo, D., and Owens, T. Altruism, fast and slow? Evidence from a meta-analysis and a new experiment.
Experimental Economics, 979–1001 (2020).
[112] Köbis, N. C., Verschuere, B., Bereby-Meyer, Y., Rand, D., and Shalvi, S. Intuitive honesty versus dishonesty: Meta-analytic evidence. Perspectives on Psychological Science (5), 778–796 (2019).
[113] Capraro, V. The dual-process approach to human sociality: A review. Available at SSRN 3409146 (2019).
[114] Gneezy, U., Rockenbach, B., and Serra-Garcia, M. Measuring lying aversion.
Journal of Economic Behavior & Organization, 293–300 (2013).

Table 1: Glossary of games and unselfish behaviours

Dictator game: We measure altruistic behaviour using the dictator game. The dictator is given a certain amount of money and has to decide how much of it, if any, to give to the recipient, who starts with nothing. The recipient is passive.

Prisoner's dilemma: We measure cooperative behaviour using the prisoner's dilemma. Two players simultaneously decide whether to cooperate or to defect. Cooperating means paying a cost c to give a benefit b > c to the other player; defecting means doing nothing.

Sender-Receiver game: We measure lying aversion using the sender-receiver game. The sender is given private information and has to report it to the receiver. In some experiments the receiver is passive [23, 114], in others active [34, 36]. Here we focus on the case in which the receiver is passive. In this case, if the sender reports the truthful information, then the sender and the receiver are paid according to Option A; if the sender reports untruthful information, then the sender and the receiver are paid according to Option B. Only the sender knows the exact payoffs associated with the two options. Depending on these payoffs, one can classify lies into four main classes: black lies benefit the sender at a cost to the receiver; altruistic white lies benefit the receiver at a cost to the sender; Pareto white lies benefit both the sender and the receiver; spiteful lies harm both the sender and the receiver.
Trade-off game: We measure the trade-off between equality and efficiency using the trade-off game. A decision-maker has to decide between two possible allocations of money that affect people other than the decision-maker. One allocation is equal (all people involved in the interaction receive the same monetary payoff); the other is efficient (the sum of the monetary payoffs of all people is greater than under the equal allocation).
Trust game: We measure trustworthiness using the second player in the trust game. The truster is given a certain amount of money and has to decide how much of it, if any, to transfer to the trustee. The amount sent to the trustee is multiplied by a constant (usually equal to 3) and given to the trustee. The trustee then decides how much of the amount s/he received to return to the truster.
Ultimatum game: We measure altruistic punishment using the second player in the ultimatum game. The proposer makes an offer about how to split a sum of money between him/herself and the responder. The responder decides whether to accept or reject the offer. If the offer is accepted, the proposer and the responder get paid according to the agreed offer; if the offer is rejected, neither the proposer nor the responder gets any money. Rejecting a low offer is considered to be a measure of altruistic punishment.

Table 2: Social preference models
Let x_i be the monetary payoff of player i. Social preference models assume that the utility function of player i, u_i, is defined over the monetary payoffs that are associated with the available actions. The main functional forms that have been proposed are the following.

Ledyard (1994): u_i(x_1, ..., x_n) = x_i + α_i Σ_{j≠i} x_j, where α_i is an individual parameter representing i's level of altruism. People with α_i = 0 maximise their monetary payoff; people with α_i > 0 are altruistic; people with α_i < 0 are spiteful.

Levine (1998): u_i(x_1, ..., x_n) = x_i + Σ_{j≠i} ((α_i + λα_j)/(1 + λ)) x_j, where α_i is an individual parameter representing i's level of altruism, whereas λ ∈ [0, 1] is a parameter representing how sensitive players are to the level of altruism of the other players.

Fehr and Schmidt (1999): u_i(x_1, ..., x_n) = x_i − (α_i/(n−1)) Σ_{j≠i} max(x_j − x_i, 0) − (β_i/(n−1)) Σ_{j≠i} max(x_i − x_j, 0), where α_i, β_i are individual parameters representing the extent to which player i cares about disadvantageous and advantageous inequities, respectively.

Bolton and Ockenfels (2000): u_i(x_1, x_2) = α_i x_i − β_i (σ_i − 1/2)², where σ_i = x_i/(x_1 + x_2), with σ_i = 1/2 if x_1 + x_2 = 0; α_i > 0 is an individual parameter representing the extent to which player i cares about their own monetary payoff, and β_i > 0 is an individual parameter representing the extent to which player i cares about minimising the distance between their share and the fair share.

Andreoni and Miller (2002): u_1(x_1, x_2) = (α x_1^ρ + (1 − α) x_2^ρ)^{1/ρ}, where α represents the extent to which the dictator cares about their own payoff, whereas ρ takes into account a potential convexity in the preferences.

Charness and Rabin (2002): u_2(x_1, x_2) = (ρr + σs) x_1 + (1 − ρr − σs) x_2, where r = 1 if x_2 > x_1 (and r = 0 otherwise) and s = 1 if x_2 < x_1 (and s = 0 otherwise).
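As a concrete illustration, the inequity-aversion utility of Fehr and Schmidt (1999) can be sketched in a few lines of Python. This is a minimal sketch, not code from the paper; the function name and the example payoffs and parameter values are illustrative assumptions.

```python
def fehr_schmidt_utility(payoffs, i, alpha, beta):
    """Fehr-Schmidt (1999) utility of player i given a list of monetary payoffs.

    alpha: weight on disadvantageous inequity (others earn more than i)
    beta:  weight on advantageous inequity (i earns more than others)
    """
    x_i = payoffs[i]
    others = [x for j, x in enumerate(payoffs) if j != i]
    n = len(payoffs)
    # Average inequity terms, normalised by the number of other players (n - 1).
    disadvantageous = sum(max(x_j - x_i, 0) for x_j in others) / (n - 1)
    advantageous = sum(max(x_i - x_j, 0) for x_j in others) / (n - 1)
    return x_i - alpha * disadvantageous - beta * advantageous

# Hypothetical two-player example: player 0 earns 10, player 1 earns 4.
# With alpha = 0.8 and beta = 0.3, player 0's utility is 10 - 0.3 * 6 = 8.2,
# while player 1's utility is 4 - 0.8 * 6 = -0.8.
```

The other models in this table share the same additive shape, differing only in the social term that is subtracted from (or added to) the own material payoff.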
Depending on the relative relationship between ρ and σ, this utility function can cover several cases, including competitive preferences, inequity-aversion preferences, and social efficiency preferences.

Table 3: The classification of norms

Behavioural scientists have long been aware of the fact that people's behaviour in a given context is influenced by what are perceived to be the norms in that context. In the same context, multiple norms might be at play. Scholars have proposed several norm classifications. In this review, we will be mainly concerned with the following three.
Schwartz [15] classified norms into two main categories, namely personal norms and social norms. Personal norms refer to internal standards about what is right and what is wrong in a given context. Social norms refer to rules and standards of behaviour that affect the choices of individuals without the force of law. Social norms are typically externally motivated.
Cialdini, Reno and Kallgren [17] focused on social norms and classified them into two main categories, namely injunctive norms and descriptive norms. Injunctive norms refer to what people think others would approve or disapprove of. Descriptive norms refer to what others actually do.
Bicchieri [18] proposed a classification into three main categories, namely personal normative beliefs, empirical expectations, and normative expectations. Personal normative beliefs refer to personal beliefs about what should happen in a given situation. Empirical expectations refer to personal beliefs about how others would behave in a given situation. Normative expectations refer to personal beliefs about what others think one should do.

Therefore, to the extent to which people believe that what should (or should not) happen in a given situation corresponds to their internal standards about what is right (or wrong), Bicchieri's personal normative beliefs correspond to Schwartz's personal norms. In one-shot anonymous games (where decision makers receive no information about the behaviour of other people playing in the same role), descriptive norms correspond to empirical expectations (we replace the actual behaviour of others with beliefs about it). Finally, normative expectations correspond to injunctive norms. Therefore, at least for the games and decision problems considered in this review, Bicchieri's classification can be interpreted as a synthesis of the previous two classifications.

Table 4: The moral preference hypothesis

Previous work explained unselfish behaviour in one-shot, anonymous economic games using social preferences defined over monetary outcomes. According to this “social preference hypothesis”, some people act unselfishly because they care not only about their own monetary payoff, but also about the monetary payoffs of other people. However, especially in the last five years, numerous experiments have challenged social preference models. The best way to organise these results is through the moral preference hypothesis, according to which people have preferences for following their own personal norms – what they think to be the right thing to do – beyond the monetary consequences that these actions bring about.
This framework outperforms the social preference hypothesis at organising cooperation in the prisoner's dilemma, altruism in the dictator game, altruistic punishment in the ultimatum game, trustworthiness in the trust game, truth-telling in the sender-receiver game, and trade-off decisions between equality and efficiency in the trade-off game.

Table 5: Moral preference models
Let a be an action for player i. Moral preference models assume that the utility function of player i, u_i, describes a tension between the material payoff associated with a, v_i(π_i(a)), and the moral utility. The main functional forms that have been proposed are the following.

Levitt and List (2007): u_i(a) = v_i(π_i(a)) + m(a). The moral cost or benefit associated with a, m(a), is assumed to depend on whether the action is observable, on the material consequences of that action, and on the set of social norms and rules in place in the society where the decision maker lives.

Krupka and Weber (2013): u_i(a) = v_i(π_i(a)) + γ_i N(a), where γ_i is the extent to which i cares about following the injunctive norm and N(a) represents the extent to which society views a as socially appropriate.

Kimbrough and Vostroknutov (2020): u_i(a) = v_i(π_i(a)) + φ_i η(a), where φ_i is the extent to which i cares about following the injunctive norm and η(a) represents the extent to which society views a as socially appropriate. (The main difference between η(a) and N(a) regards the way they are computed.)

Our proposal: u_i(a) = v_i(π_i(a)) + μ_i P_i(a), where μ_i represents the extent to which i cares about following their own personal norms and P_i(a) represents the extent to which i personally thinks that a is the right thing to do.

Table 6: Outstanding challenges
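The four specifications in Table 5 share one additive form: material utility plus a weighted moral term. The following minimal Python sketch illustrates that common structure for the personal-norm variant u_i(a) = v_i(π_i(a)) + μ_i P_i(a); it is an assumption-laden illustration, not code from the paper, and the function name and all example values are hypothetical.

```python
def moral_utility(material_payoff, moral_weight, personal_norm_score):
    """u_i(a) = v_i(pi_i(a)) + mu_i * P_i(a), with v_i taken as the identity."""
    return material_payoff + moral_weight * personal_norm_score

# Hypothetical dictator deciding whether to keep 10 or split 5/5. Suppose the
# dictator rates splitting as the right thing to do (P = 1) and keeping as
# wrong (P = -1). With moral weight mu = 3:
u_keep = moral_utility(10, 3, -1)   # 10 - 3 = 7
u_split = moral_utility(5, 3, 1)    # 5 + 3 = 8
# The moral term reverses the purely material ranking: splitting is preferred.
```

The injunctive-norm models of Krupka–Weber and Kimbrough–Vostroknutov have the same shape, with the society-level appropriateness rating N(a) or η(a) standing in for the personal norm score.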