On Formalizing Fairness in Prediction with Machine Learning
Pratik Gajane (Department of Computer Science, Montanuniversität Leoben, Austria)
Mykola Pechenizkiy (Department of Computer Science, TU Eindhoven, the Netherlands)

Abstract
Machine learning algorithms for prediction are increasingly being used in critical decisions affecting human lives. Various fairness formalizations, with no firm consensus yet, are employed to prevent such algorithms from systematically discriminating against people based on certain attributes protected by law. The aim of this article is to survey how fairness is formalized in the machine learning literature for the task of prediction and to present these formalizations with their corresponding notions of distributive justice from the social sciences literature. We provide theoretical as well as empirical critiques of these notions from the social sciences literature and explain how these critiques limit the suitability of the corresponding fairness formalizations to certain domains. We also suggest two notions of distributive justice which address some of these critiques and discuss avenues for prospective fairness formalizations.
1. Introduction
Discrimination refers to unfavourable treatment of people due to their membership of certain demographic groups that are distinguished by attributes protected by law (henceforth, protected attributes). Discrimination, based on many attributes and in several domains, is prohibited by international legislation. Nowadays, machine learning algorithms are increasingly being used in high-impact domains such as credit, employment, education, and criminal justice, which are prone to discrimination. The goal of fairness in prediction with machine learning is to design algorithms that make fair predictions devoid of discrimination.
The aim of this article is to survey how fairness is formalized in the machine learning literature and to present these formalizations with their corresponding notions from the social sciences literature.
The fairness formalizations in the machine learning literature correspond to notions of distributive justice from the social sciences literature, as we discuss in Section 2. Since some formalizations of fairness can conflict with others, the predictions produced by algorithms using them would vastly differ as well. Therefore, from a practical point of view, it is important to study how fairness is formalized in the machine learning literature and the implications of the various formalizations. To this end, we present theoretical as well as empirical critiques of their corresponding notions from the social sciences literature.
The co-presentation is intended to assist in determining the suitability of the existing formalizations of fairness in the machine learning literature and in building newer formalizations of fairness. In Section 3, we nominate two notions from the social sciences literature which answer some of the critiques of the existing formalizations in the machine learning literature. Lastly, in Section 4, we discuss avenues for prospective fairness formalizations. We begin by formulating the problem of prediction with machine learning.
Mathematical formulation of prediction with machine learning:
Let $X$, $A$ and $Z$ represent a set of individuals (i.e. a population), the protected attributes and the remaining attributes, respectively. Each of the individuals can be assigned an outcome from a finite set $Y$. Some of the prediction outcomes are considered to be more beneficial or desirable than others. For an individual $x_i \in X$, let $y_i$ be the true outcome (or label) to be predicted. A (possibly randomized) predictor can be represented by a mapping $H : X \to Y$ from the population $X$ to the set of outcomes $Y$, such that $H(x_i)$ is the predicted outcome for individual $x_i$. A group-conditional predictor consists of a set of mappings, one for each group of the population, $H = \{H_S\}$ for all $S \subset X$. For the sake of simplicity, assume that the groups induce a partition of the population.
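To make this notation concrete, the sketch below sets up a toy population with a binary protected attribute, a plain predictor $H$, and a group-conditional predictor $\{H_S\}$ with one mapping per protected group. All data, thresholds, and names are hypothetical and serve only to illustrate the objects defined above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy population: each row of Z holds the non-protected attributes z_i of an
# individual x_i; A holds the (binary) protected attribute a_i.
n = 1000
A = rng.integers(0, 2, size=n)                    # protected attribute
Z = rng.normal(size=(n, 2))                       # remaining attributes
Y = (Z[:, 0] + 0.5 * Z[:, 1] > 0).astype(int)     # true outcomes y_i in {0, 1}

def predictor(z):
    """A plain predictor H: X -> Y, applied to every individual alike."""
    return (z[:, 0] > 0).astype(int)

# A group-conditional predictor H = {H_S}: one mapping per protected group.
group_predictors = {
    0: lambda z: (z[:, 0] > 0.2).astype(int),     # H_S for group 0
    1: lambda z: (z[:, 0] > -0.2).astype(int),    # H_T for group 1
}

def group_conditional_predict(Z, A):
    """Apply to each individual the mapping H_S of the group they belong to."""
    y_hat = np.empty(len(A), dtype=int)
    for group, h in group_predictors.items():
        mask = (A == group)
        y_hat[mask] = h(Z[mask])
    return y_hat

y_hat_plain = predictor(Z)                        # predictions of the single mapping
y_hat_group = group_conditional_predict(Z, A)     # predictions of the group-conditional predictor
```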
2. What is fair? (Formalizations of fairness in prediction with machine learning)
The first step in formalizing fairness in prediction with machine learning is to answer the following two questions:

• Parity or preference?: whether fairness means achieving parity or satisfying preferences.
• Treatment or impact?: whether fairness is to be maintained in treatment or in impact (results).

Next, we will see the existing formalizations of fairness in the machine learning literature. Table 1 summarizes how they answer the questions presented above.

Table 1. The surveyed formalizations of fairness

                Parity                                   Preference
  Treatment     Unawareness; Counterfactual measures     Preferred treatment
  Impact        Group fairness; Individual fairness;     Preferred impact
                Equality of opportunity
Fairness through unawareness:

Any predictor which is not group-conditional satisfies this measure. Formally, it is defined as follows:
Definition 1 (Fairness through unawareness) A predictor is said to achieve fairness through unawareness if protected attributes are not explicitly used in the prediction process.

A number of proposed predictors in the machine learning literature satisfy this measure (15; 29), while some do not (7; 21; 25). However, satisfying fairness through unawareness is not a sufficient condition to avoid discrimination when other background knowledge is available (33). Furthermore, some of the assumptions made during the construction of a predictor might not hold in real-life scenarios (8), which leads to discrimination even while satisfying this measure.

From the point of view of distributive justice, fairness through unawareness corresponds to the approach of being "blind" to counter discrimination. However, various discriminatory practices have been documented following the race-blind approach in education, housing, credit, and the criminal justice system (6; 44). It has been shown that, in the long run, a race-blind approach is less efficient than a race-conscious approach (17). Alternatively, some studies show that a blind approach can work for some specific tasks (20).

The above critiques challenge the suitability of fairness through unawareness to domains in which protected attributes can be deduced from easily available non-protected attributes and in which structural barriers, which hinder the protected groups, are shown to be present by credible surveys.
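As a minimal sketch of the definition and of the caveat above, the code below never passes the protected attribute to the prediction rule, yet still produces unequal outcome rates because, in this synthetic example, the non-protected attributes are correlated with the protected one. All data and thresholds are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 1000
A = rng.integers(0, 2, size=n)                      # protected attribute
Z = rng.normal(size=(n, 2)) + A[:, None] * 1.5      # non-protected attributes, correlated with A

def unaware_predictor(z):
    # Fairness through unawareness: the protected attribute A is never
    # passed to the prediction rule, only the remaining attributes Z.
    return (z[:, 0] > 0.75).astype(int)

y_hat = unaware_predictor(Z)

# The caveat discussed above: since A can be deduced from Z, the "unaware"
# predictor can still produce very different outcome rates per group.
rate_0 = y_hat[A == 0].mean()
rate_1 = y_hat[A == 1].mean()
print(f"beneficial-outcome rate, group 0: {rate_0:.2f}, group 1: {rate_1:.2f}")
```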
Counterfactual measures:

These measures model fairness through tools from causal inference. Kusner et al. (27) recently introduced one such measure, which can be defined as follows:
Definition 2 (Counterfactual fairness)
A predictor $H$ is counterfactually fair, given $Z = z$ and $A = a$, for all $y$ and for any value $a'$ attainable by $A$, iff
$P\{H_{A = a} = y \mid Z = z, A = a\} = P\{H_{A = a'} = y \mid Z = z, A = a\}$.

In the above definition, $H_{A = a}$ is to be interpreted as the outcome of the predictor $H$ if $A$ had taken the value $a$. For the mathematical details of how such a statement is realized, refer to Kusner et al. (27). This measure deems a predictor to be fair if its output remains the same when the protected attribute is flipped to its counterfactual value. In effect, it compares every individual with a different version of themselves. A similar measure was introduced independently by Kilbertus et al. (26).

In the social sciences literature, the closest correspondent to these measures is the theory of counterfactual reasoning given by Lewis (28). There has been research indicating that counterfactual reasoning is susceptible to hindsight bias (34; 38) and to outcome bias, i.e. evaluating the quality of a decision when its outcome is already known (4). Moreover, it has been argued that counterfactual reasoning may negatively influence the process of identifying causality (39; 9).

These critiques bring into question the suitability of counterfactual measures for potential domains for prediction using machine learning, such as the health-care system or the judicial system, where the above-mentioned biases are frequently observed.
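The sketch below illustrates only the flip-invariance intuition described above, in its most naive form: each individual's prediction is compared with the prediction obtained after swapping the protected attribute while holding all other attributes fixed. A faithful check of Definition 2 would, as in Kusner et al. (27), also propagate the intervention on $A$ to the other attributes through a causal model; that step is deliberately omitted here, and all data and thresholds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 1000
A = rng.integers(0, 2, size=n)          # binary protected attribute
Z = rng.normal(size=(n, 2))             # remaining attributes

def predictor(z, a):
    # A predictor that (deliberately) uses the protected attribute.
    return ((z[:, 0] + 0.8 * a) > 0.4).astype(int)

# Naive flip check: swap A and keep Z fixed.  This ignores the effect an
# intervention on A would have on Z, which a full causal model would capture.
y_factual = predictor(Z, A)
y_flipped = predictor(Z, 1 - A)
flip_rate = (y_factual != y_flipped).mean()
print(f"fraction of predictions that change when A is flipped: {flip_rate:.2f}")
```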
Group fairness:

Group fairness imposes the condition that the predictor should predict a particular outcome for individuals across groups with almost equal probability.

Definition 3 (Group fairness) A predictor $H : X \to Y$ achieves group fairness with bias $\epsilon$, with respect to groups $S, T \subseteq X$ and any subset of outcomes $O \subseteq Y$, iff
$|P\{H(x_i) \in O \mid x_i \in S\} - P\{H(x_j) \in O \mid x_j \in T\}| \le \epsilon$.

From the above definition it is clear that group fairness imposes the condition of statistical and demographic parity on the predictor. Unlike some of the other formalizations of fairness, group fairness is independent of the "ground truth", i.e. the label information. This is useful when reliable ground-truth information is not available; e.g., in domains like employment, housing, credit and criminal justice, discrimination against protected groups has been well documented (31; 45). Alternatively, in cases where disproportionality in the respective outcomes can be justified by non-protected attributes (which do not merely serve as a proxy for protected attributes), imposing statistical parity leads to incorrect outcomes and may amount to discrimination against qualified candidates (29). Another deficiency of group fairness is that the predictor is not stipulated to select the most "qualified" individuals within the groups as long as it maintains statistical parity (15).

The formalization of group fairness follows from the notion of collectivist egalitarianism for distributive justice. In practice, the biggest (in terms of the number of people affected) implementation of group fairness is the application of affirmative action (11) in India and the USA to address discrimination on the basis of caste (14), race and gender. See Weisskopf (46) for arguments made for and against affirmative action policies in both India and the USA. Two of the standard objections to group fairness are that it is not meritocratic and that it reduces efficiency.

The underlying assumption behind the first claim is that the allocation of social benefits without affirmative action is meritocratic. However, several studies (12; 5; 32) have confirmed discrimination on the basis of protected attributes. For the second claim, Holzer and Neumark (23) conclude on the basis of several studies that "the empirical case against Affirmative Action on the grounds of efficiency is weak at best". In India, a study by Deshpande and Weisskopf (13) found no evidence of loss in efficiency because of affirmative action policies. Nonetheless, the deficiencies mentioned earlier limit the applicability of group fairness.
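Definition 3 translates directly into a comparison of per-group rates of a beneficial outcome. The short sketch below computes the empirical bias, i.e. the largest gap between any two groups, for given predictions and group labels; the arrays are toy assumptions.

```python
import numpy as np

def group_fairness_bias(y_hat, A, beneficial=1):
    """Empirical version of Definition 3: the largest gap, across groups,
    in the probability of receiving the beneficial outcome."""
    rates = [np.mean(y_hat[A == g] == beneficial) for g in np.unique(A)]
    return max(rates) - min(rates)

# Usage with any predictions y_hat and group labels A (toy arrays here):
y_hat = np.array([1, 0, 1, 1, 0, 1, 0, 0])
A     = np.array([0, 0, 0, 0, 1, 1, 1, 1])
eps = group_fairness_bias(y_hat, A)
print(f"group fairness holds with bias eps = {eps:.2f}")
```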
Individual fairness:

Individual fairness ascertains that a predictor is fair if it produces similar outputs for similar individuals.

Definition 4 (Individual fairness) A predictor achieves individual fairness iff $H(x_i) \approx H(x_j)$ whenever $d(x_i, x_j) \approx 0$, where $d : X \times X \to \mathbb{R}$ is a distance metric for individuals.

Several works, including Dwork et al. (15) and Luong et al. (29), use this notion of fairness. The notion of individual fairness can then be captured by the $(D, d)$-Lipschitz property, which states that $D(H(x_i), H(x_j)) \le d(x_i, x_j)$, where $D$ is a distance measure for distributions over the outcomes. Furthermore, Dwork et al. (15) prove that if a predictor satisfies the $(D, d)$-Lipschitz property, then it also achieves statistical parity with a certain bias.

In the social sciences literature, this formalization is equivalent to individualism egalitarianism. According to Sacksteder (40), this is the formal principle of justice. This notion delegates the responsibility of ensuring fairness from the predictor to the distance metric. If the distance metric uses the protected attributes, directly or indirectly, to compute the distance between two individuals, a predictor satisfying Definition 4 could still be discriminatory. Therefore, the potency of this notion of fairness to prevent discrimination depends largely upon the distance metric used; Dwork et al. (15) have provided some approaches to build such distance metrics. Hence, individual fairness as stated above cannot be considered suitable for domains where a reliable and non-discriminating distance metric is not available.
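A brute-force way to audit the $(D, d)$-Lipschitz property is to compare every pair of individuals. The sketch below does so for a predictor that outputs a probability of the beneficial outcome, with the absolute difference of probabilities standing in for $D$ and a Euclidean metric standing in for the task-specific metric $d$; as discussed above, the result is only as meaningful as the chosen metric, and all names and data here are illustrative assumptions.

```python
import numpy as np
from itertools import combinations

def lipschitz_violations(scores, X_feats, d, D=lambda p, q: abs(p - q)):
    """Check the (D, d)-Lipschitz condition D(H(x_i), H(x_j)) <= d(x_i, x_j)
    for every pair of individuals.  `scores` are the predictor's outputs
    (here: probabilities of the beneficial outcome), `d` is the task-specific
    similarity metric, and D defaults to the gap between the two output
    distributions over a binary outcome."""
    bad_pairs = []
    for i, j in combinations(range(len(scores)), 2):
        if D(scores[i], scores[j]) > d(X_feats[i], X_feats[j]):
            bad_pairs.append((i, j))
    return bad_pairs

# Toy usage: Euclidean distance on non-protected attributes as the metric d.
X_feats = np.array([[0.0, 0.1], [0.0, 0.2], [2.0, 2.0]])
scores  = np.array([0.9, 0.2, 0.8])            # hypothetical predictor outputs
d = lambda u, v: np.linalg.norm(u - v)
print(lipschitz_violations(scores, X_feats, d))  # pair (0, 1) violates the condition
```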
Equality of opportunity:

In the machine learning literature, the formalization of equality of opportunity was introduced by Hardt et al. (21). An equivalent formalization was also proposed concurrently and independently by Zafar et al. (48). To formalize it, let us consider the case of binary outcomes with a single beneficial outcome $y = 1$.

Definition 5 (Equal opportunity) A predictor is said to satisfy equal opportunity with respect to group $S$ iff $P\{H(x_i) = 1 \mid y_i = 1, x_i \in S\} = P\{H(x_j) = 1 \mid y_j = 1, x_j \in X \setminus S\}$.

It can be read as a stipulation that the true positive rate should be the same for all groups. An equivalent notion proposed by Zafar et al. (48), called disparate mistreatment, asks for the equivalence of misclassification rates across the groups.

In the social sciences literature, the corresponding notion was presented by Rawls (37). An essay by Arneson (2) states that equality of opportunity would not be able to cope with the problems of stunted ambition and selection by bigotry. The notion of equality of opportunity has also been criticized for not considering the effect of discrimination due to protected attributes like gender (30) and race (43). It has been shown that protected attributes like race and gender affect one's access to opportunities in domains such as education, business and politics in many parts of the world (24). The exclusion of attributes like race and gender from the list of attributes deemed to affect an individual's life prospects in the notion of equality of opportunity thus calls into question its suitability to domains in which there is vast evidence that such attributes do indeed affect one's prospects.
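Checking Definition 5 empirically amounts to comparing per-group true positive rates. The sketch below computes them for given predictions, true labels, and group labels; the toy arrays are assumptions for illustration, and equal opportunity holds (up to sampling noise) when the per-group rates coincide.

```python
import numpy as np

def true_positive_rates(y_hat, y_true, A, beneficial=1):
    """Per-group true positive rates P{H(x) = 1 | y = 1, x in S} from Definition 5."""
    rates = {}
    for g in np.unique(A):
        mask = (A == g) & (y_true == beneficial)
        rates[g] = np.mean(y_hat[mask] == beneficial)
    return rates

# Usage: equal opportunity holds exactly when all per-group rates coincide.
y_true = np.array([1, 1, 0, 1, 1, 1, 0, 1])
y_hat  = np.array([1, 0, 0, 1, 1, 1, 1, 0])
A      = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(true_positive_rates(y_hat, y_true, A))   # both groups: 2/3
```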
Preference-based fairness:

Zafar et al. (47) introduce two preference-based formalizations of fairness. In order to provide the definitions, the authors first introduce the notion of group benefit, defined as the expected proportion of individuals in a group for whom the predictor predicts the beneficial outcome. Group benefit can also be defined as the expected proportion of individuals from the group who receive the beneficial output and whose true label is the same. Based on this notion of group benefit, Zafar et al. (47) provide the following two fairness formalizations.
Definition 6 (Preferred treatment) A group-conditional predictor is said to satisfy preferred treatment if each group of the population receives more benefit from its respective predictor than it would have received from any other group's predictor, i.e. $B_S(H_S) \ge B_S(H_T)$ for all $S, T \subset X$.

Definition 7 (Preferred impact) A predictor $H$ is said to have preferred impact as compared to another predictor $H'$ if $H$ offers at least as much benefit as $H'$ for all the groups, i.e. $B_S(H) \ge B_S(H')$ for all $S \subset X$.

If a classifier is not group-conditional, then it satisfies preferred treatment by default. In certain applications, there might not be a single universally accepted beneficial outcome. It is possible that a few individuals from a group may prefer an outcome other than the one preferred by the majority of the group. In order to alleviate their concerns, the collectivist definition of group benefit needs to be extended to account for individual preferences.

In the social sciences literature, the above notion corresponds to envy-freeness (3). This notion of fairness is attractive because it can be defined in terms of ordinal preference relations over the utility values of the predictors. On the other hand, Holcombe (22) shows that freedom from envy is neither necessary nor sufficient for fairness. For many real-world problems, one needs to find fair and efficient solutions amongst the groups. An efficient solution ensures the greatest possible benefit to the groups. In decision-making problems, like the domain applications of prediction with machine learning, it can be formally expressed by the notion of
Pareto-efficiency. However, deciding whether there is a Pareto-efficient, envy-free allocation is computationally very hard even with simple additive preferences (10). These critiques indicate that the suitability of such envy-freeness-based formalizations is limited to domains where an efficient, envy-free allocation can be computed easily.
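Both preference-based definitions reduce to comparisons of group benefits $B_S(\cdot)$. The sketch below computes empirical group benefits and checks Definitions 6 and 7 for hypothetical threshold predictors; all names, thresholds and data are assumptions used only to illustrate the comparisons.

```python
import numpy as np

def group_benefit(predict, Z, A, group, beneficial=1):
    """B_S(H): fraction of group S that receives the beneficial outcome under H."""
    mask = (A == group)
    return np.mean(predict(Z[mask]) == beneficial)

def satisfies_preferred_treatment(group_predictors, Z, A):
    """Definition 6: every group gains at least as much from its own predictor H_S
    as it would from any other group's predictor H_T."""
    groups = list(group_predictors)
    return all(
        group_benefit(group_predictors[s], Z, A, s) >= group_benefit(group_predictors[t], Z, A, s)
        for s in groups for t in groups
    )

def has_preferred_impact(predict_h, predict_h_prime, Z, A):
    """Definition 7: H offers every group at least as much benefit as H'."""
    return all(
        group_benefit(predict_h, Z, A, g) >= group_benefit(predict_h_prime, Z, A, g)
        for g in np.unique(A)
    )

# Toy usage with hypothetical threshold predictors.
rng = np.random.default_rng(3)
A = rng.integers(0, 2, size=500)
Z = rng.normal(size=(500, 2))
group_predictors = {0: lambda z: (z[:, 0] > 0.0).astype(int),
                    1: lambda z: (z[:, 0] > 0.5).astype(int)}
# Group 1 would gain more from group 0's predictor, so this is expected to be False.
print(satisfies_preferred_treatment(group_predictors, Z, A))
# The looser threshold benefits every group at least as much, so this is expected to be True.
print(has_preferred_impact(group_predictors[0], group_predictors[1], Z, A))
```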
3. Prospective notions of fairness
In this section, we describe two prospective notions of fairness which have not been considered in the machine learning literature so far. Our intent is to address the critique that many of the past formalizations, as seen in Section 2, do not offset the fact that social benefits are allocated unequally by the algorithms among people owing to attributes they had no say in.

• Equality of resources:
Dworkin (16) proposes the notion of equality of resources, in which an unequal distribution of social benefits is only considered fair when it results from the intentional decisions and actions of the concerned individuals. Equality of resources is ambition-sensitive, i.e. each individual's ambitions and the choices that follow from them ascertain the benefits they receive, and endowment-insensitive, i.e. each individual's unchosen circumstances, including natural endowments, should be offset. In the second property, equality of resources differs from equality of opportunity, as the latter considers differences in natural endowments (including protected attributes such as sex) as facts of nature which need not be adjusted to achieve fairness.

• Equality of capability of functioning:
Sen (42) extends the insight that people should not be held responsible for attributes they had no say in to include personal attributes which cause difficulty in developing functionings. Functionings are states of "being and doing", that is, various states of existence and activities that an individual can undertake. Sen (41; 42) argues that variations related to protected attributes like age, sex, gender, race and caste give individuals unequal powers to achieve goals even when they have the same opportunities. In order to equalize capabilities, people should be compensated for their unequal powers to convert opportunities into functionings. To this point, it sounds similar to equality of resources described above. Crucially, however, the notion of equality of capability calls for addressing inequalities due to social endowments (e.g. gender) as well as natural endowments (e.g. sex), in contrast to equality of resources (35).

One of the main strengths of this notion of fairness is that it is flexible, which allows it to be developed and applied in many different ways (1). Indeed, this notion has been used in the foundations of the human development paradigm by the United Nations (18; 19). One of the major criticisms of the equality of capability theory concerns the failure to identify valuable capabilities (36). Another criticism is that the informational requirement of this approach can be very high (1). The second criticism applies to equality of resources as well, and it makes exact mathematical formalizations of these notions a potentially difficult problem. However, the suitability of these prospective formalizations (unlike the current formalizations) to domains in which natural endowments or social endowments or both impede an individual's prospects of receiving social benefits makes the open problem of formalizing them worthwhile. We intend this article to serve as a call for machine learning experts to work on formalizing them.
4. Discussion and further directions
As the field of fairness in machine learning prediction algorithms is evolving rapidly, it is important to analyze the fairness formalizations considered so far. To this end, we juxtaposed the fairness notions previously considered in the machine learning literature with their corresponding theories of distributive justice in the social sciences literature. We saw the theoretical critique and analysis of these fairness notions from the social sciences literature. Such critiques of the formalizations, together with experimental studies of their use in large-scale practice, serve as guiding principles when choosing the fairness formalizations to use in particular domains.

We also proposed two prospective notions of fairness which have been studied extensively in the social sciences literature. Of course, we do not claim that these notions will serve as a panacea for all the critiques of the current notions. Our intention is to initiate a discussion about fairness formalizations in prediction with machine learning which recognize that the problem of fair prediction cannot be addressed without considering social issues such as unequal access to resources and social conditioning. While these factors are difficult to quantify and formalize mathematically, it is important to acknowledge their impact and attempt to incorporate them into fairness formalizations.
References

[1] Sabina Alkire. 2002. Valuing Freedoms: Sen's Capability Approach and Poverty Reduction. Oxford University Press.
[2] Richard J. Arneson. 1999. Against Rawlsian Equality of Opportunity. Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition 93, 1 (1999), 77–112.
[3] Christian Arnsperger. 1994. Envy-Freeness and Distributive Justice. Journal of Economic Surveys 8, 2 (June 1994), 155–186.
[4] Jonathan Baron and John Hershey. 1988. Outcome Bias in Decision Evaluation. 54 (May 1988), 569–579.
[5] Marianne Bertrand and Sendhil Mullainathan. 2004. Are Emily and Greg More Employable Than Lakisha and Jamal? A Field Experiment on Labor Market Discrimination. American Economic Review 94, 4 (September 2004), 991–1013. DOI: http://dx.doi.org/10.1257/0002828042002561
[6] Eduardo Bonilla-Silva. 2013. Racism without Racists: Color-Blind Racism and the Persistence of Racial Inequality in the United States (4th ed.). Rowman & Littlefield Publishers.
[7] Toon Calders and Sicco Verwer. 2010. Three Naive Bayes Approaches for Discrimination-free Classification. Data Min. Knowl. Discov. 21, 2 (Sept. 2010), 277–292.
[8] Toon Calders and Indrė Žliobaitė. 2013. Why Unbiased Computational Processes Can Lead to Discriminative Decision Procedures. Springer Berlin Heidelberg, Berlin, Heidelberg, 43–57. DOI: http://dx.doi.org/10.1007/978-3-642-30487-3_3
[9] A. P. Dawid. 2000. Causal Inference without Counterfactuals. J. Amer. Statist. Assoc.
[10] On the Complexity of Efficiency and Envy-Freeness in Fair Division of Indivisible Goods with Additive Preferences. Springer Berlin Heidelberg, Berlin, Heidelberg, 98–110.
[11] Ashwini Deshpande. 2013. Affirmative Action in India. Oxford University Press, New Delhi.
[12] Ashwini Deshpande and Katherine Newman. 2007. Where the Path Leads: The Role of Caste in Post-University Employment Expectations. Economic and Political Weekly 42, 41 (2007), 4133–4140.
[13] Ashwini Deshpande and Thomas E. Weisskopf. 2016. Affirmative Action and Productivity in the Indian Railways. The Review of Black Political Economy 43, 2 (June 2016), 245–251. DOI: http://dx.doi.org/10.1007/s12114-015-9217-2
[14] Louis Dumont. 1980. Homo Hierarchicus: The Caste System and Its Implications. The University of Chicago Press.
[15] Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. 2012. Fairness Through Awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference (ITCS '12).
[16] Ronald Dworkin. 1981. What is Equality? Part 2: Equality of Resources. Philosophy and Public Affairs 10, 4 (1981), 283–345.
[17] Roland Fryer, G. Loury, and T. Yuret. 2008. An Economic Analysis of Color-Blind Affirmative Action. Journal of Law, Economics, and Organization 24, 2 (2008), 319–355.
[18] Sakiko Fukuda-Parr. 2003. The Human Development Paradigm: Operationalizing Sen's Ideas on Capabilities. Feminist Economics 9, 2–3 (2003), 301–317.
[19] Sakiko Fukuda-Parr and A. K. Shiva Kumar. 2003. Readings in Human Development: Concepts, Measures and Policies for a Development Paradigm. Oxford University Press.
[20] Claudia Goldin and Cecilia Rouse. 2000. Orchestrating Impartiality: The Impact of "Blind" Auditions on Female Musicians. American Economic Review 90, 4 (September 2000), 715–741. DOI: http://dx.doi.org/10.1257/aer.90.4.715
[21] Moritz Hardt, Eric Price, and Nati Srebro. 2016. Equality of Opportunity in Supervised Learning. In Advances in Neural Information Processing Systems.
[22] Randall G. Holcombe. 1997. Absence of Envy Does Not Imply Fairness. Southern Economic Journal 63, 3 (1997), 797–802.
[23] Harry Holzer and David Neumark. 2000. Assessing Affirmative Action. Journal of Economic Literature 38, 3 (2000), 483–568.
[24] Sarah Iqbal. 2015. Women, Business, and the Law 2016: Getting to Equal. (2015).
[25] Toshihiro Kamishima, Shotaro Akaho, Hideki Asoh, and Jun Sakuma. 2012. Fairness-Aware Classifier with Prejudice Remover Regularizer. In Proceedings of the 2012 European Conference on Machine Learning and Knowledge Discovery in Databases - Volume Part II (ECML PKDD '12). 35–50.
[26] Niki Kilbertus, Mateo Rojas Carulla, Giambattista Parascandolo, Moritz Hardt, Dominik Janzing, and Bernhard Schölkopf. 2017. Avoiding Discrimination through Causal Reasoning. In Advances in Neural Information Processing Systems 30, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.). Curran Associates, Inc., 656–666.
[27] Matt J. Kusner, Joshua Loftus, Chris Russell, and Ricardo Silva. 2017. Counterfactual Fairness. In Advances in Neural Information Processing Systems 30, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.). Curran Associates, Inc., 4066–4076.
[28] David Lewis. 1973. Causation. Journal of Philosophy 70, 17 (1973), 556–567.
[29] Binh Thanh Luong, Salvatore Ruggieri, and Franco Turini. 2011. k-NN As an Implementation of Situation Testing for Discrimination Discovery and Prevention. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '11). 502–510.
[30] Susan Moller Okin. 1991. Justice, Gender, and the Family. Philosophy and Public Affairs 20, 1 (1991), 77–97.
[31] Devah Pager and Hana Shepherd. 2008. The Sociology of Discrimination: Racial Discrimination in Employment, Housing, Credit, and Consumer Markets. Annual Review of Sociology 34 (2008), 181–209.
[32] Devah Pager and Bruce Western. 2006. Race at Work: Realities of Race and Criminal Record in the New York City Job Market. (2006).
[33] Dino Pedreshi, Salvatore Ruggieri, and Franco Turini. 2008. Discrimination-aware Data Mining. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '08).
[34] John V. Petrocelli and Steven J. Sherman. 2010. Event detail and confidence in gambling: The role of counterfactual thought reactions. Journal of Experimental Social Psychology.
[35] Roland Pierik and Ingrid Robeyns. 2007. Resources versus Capabilities: Social Endowments in Egalitarian Theory. Political Studies 55, 1 (2007), 133–152.
[36] Mozaffar Qizilbash. 1996. Capabilities, wellbeing and human development: A survey. The Journal of Development Studies 33, 2 (1996), 143–162. DOI: http://dx.doi.org/10.1080/00220389608422460
[37] John Rawls. 1971. A Theory of Justice. Harvard University Press.
[38] Neal J. Roese and James M. Olson. 1996. Counterfactuals, causal attributions, and the hindsight bias: A conceptual integration. Journal of Experimental Social Psychology 32, 3 (1996), 197–227.
[39] Neal J. Roese and James M. Olson. 1997. Counterfactual Thinking: The Intersection of Affect and Function. Advances in Experimental Social Psychology, Vol. 29. Academic Press, 1–59. DOI: https://doi.org/10.1016/S0065-2601(08)60015-5
[40] William Sacksteder. 1964. The Idea of Justice and the Problem of Argument. Chaim Perelman. Ethics 75, 1 (1964), 66–67.
[41] Amartya Sen. 1990. Justice: Means versus Freedoms. Philosophy and Public Affairs.
[42] Amartya Sen. 1992. Inequality Reexamined. Clarendon Press, Oxford; Russell Sage Foundation, New York; Harvard University Press, Cambridge, MA.
[43] Seana Valentine Shiffrin. 2004. Race and Ethnicity, Race, Labor, and the Fair Equality of Opportunity Principle. Fordham Law Review 72, 5 (2004).
[44] Andrew Taslitz. 2007. Racial Blindsight: The Absurdity of Color-Blind Criminal Justice. Ohio State Journal of Criminal Law (2007).
[45] Kaveh Waddell. 2016. How Algorithms Can Bring Down Minorities' Credit Scores. (2 December 2016).
[46] Thomas E. Weisskopf. 2004. Affirmative Action in the United States and India: A Comparative Perspective. Routledge, London; New York.
[47] M. Zafar, I. Valera, M. Gomez Rodriguez, K. P. Gummadi, and A. Weller. 2017. From Parity to Preference-based Notions of Fairness in Classification. ArXiv e-prints (June 2017).
[48] Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, and Krishna P. Gummadi. 2017. Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment. In Proceedings of the 26th International Conference on World Wide Web (WWW '17).