Fairness for Unobserved Characteristics: Insights from Technological Impacts on Queer Communities
Nenad Tomasev, Kevin R. McKee, Jackie Kay, Shakir Mohamed
DeepMind, London, UK
nenadt@, kevinrmckee@, kayj@, [email protected]
Abstract
Advances in algorithmic fairness have largely omitted sexual orientation and gender identity. We explore queer concerns in privacy, censorship, language, online safety, health, and employment to study the positive and negative effects of artificial intelligence on queer communities. These issues underscore the need for new directions in fairness research that take into account a multiplicity of considerations, from privacy preservation, context sensitivity and process fairness, to an awareness of sociotechnical impact and the increasingly important role of inclusive and participatory research processes. Most current approaches for algorithmic fairness assume that the target characteristics for fairness—frequently, race and legal gender—can be observed or recorded. Sexual orientation and gender identity are prototypical instances of unobserved characteristics, which are frequently missing, unknown or fundamentally unmeasurable. This paper highlights the importance of developing new approaches for algorithmic fairness that break away from the prevailing assumption of observed characteristics.
Introduction
As the field of algorithmic fairness has matured, the ways in which machine learning researchers and developers operationalise approaches for fairness have expanded in scope and applicability. Fairness researchers have made important advances and demonstrated how the risks of algorithmic systems are imbalanced across different characteristics of the people who are analysed and affected by classifiers and decision-making systems (Barocas, Hardt, and Narayanan 2019; Fudenberg and Levine 2012). Progress has been particularly strong with respect to race and legal gender. Fairness studies have helped to draw attention to racial bias in recidivism prediction (Angwin et al. 2016), expose racial and gender bias in facial recognition (Buolamwini and Gebru 2018), reduce gender bias in language processing (Bolukbasi et al. 2016; Park, Shin, and Fung 2018), and increase the accuracy and equity of decision making for child protective services (Chouldechova et al. 2018).

Throughout this paper, we distinguish between 'legal gender' (the gender recorded on an individual's legal documents, often assigned to them at birth by the government, physicians or their parents) and 'gender identity' (an individual's personal feelings and convictions about their gender; Brook n.d.).
Algorithms have moral consequences for queer communities, too. However, algorithmic fairness for queer individuals and communities remains critically underexplored. In part, this stems from the unique challenges posed by studying sexual orientation and gender identity. Most definitions of algorithmic fairness share a basis in norms of egalitarianism (Barocas, Hardt, and Narayanan 2019; Binns 2018). For example, classification parity approaches to fairness aim to equalise predictive performance measures across groups, whereas anti-classification approaches rely on the omission of protected attributes from the decision making process to ensure different groups receive equivalent treatment (Corbett-Davies and Goel 2018). An inherent assumption of these approaches is that the protected characteristics are known and available within datasets. Sexual orientation and gender identity are prototypical examples of unobserved characteristics, presenting challenging obstacles for fairness research (Andrus et al. 2020; Jacobs and Wallach 2019).

This paper explores the need for queer fairness by reviewing the experiences of technological impacts on queer communities. For our discussion, we define 'queer' as 'possessing non-normative sexual identity, gender identity, and/or sexual characteristics'. We consider this to include lesbian, gay, bisexual, pansexual, transgender, and asexual identities—among others.

The focus on queer communities is important for several reasons. Given the historical oppression and contemporary challenges faced by queer communities, there is a substantial risk that artificial intelligence (AI) systems will be designed and deployed unfairly for queer individuals. Compounding this risk, sensitive information for queer people is usually not available to those developing AI systems, rendering the resulting unfairness unmeasurable from the perspective of standard group fairness metrics. Despite these issues, fairness research with respect to queer communities is an understudied area. Ultimately, the experiences of queer communities can reveal insights for algorithmic fairness that are transferable to a broader range of characteristics, including disability, class, religion, and race.

This paper aims to connect ongoing efforts to strengthen queer communities in AI research (Agnew et al. 2018; Agnew, Bilenko, and Gontijo Lopes 2019; John et al. 2020) and sociotechnical decision making (Out in Tech n.d.; Lesbians Who Tech n.d.; Intertech LGBT+ Diversity Forum n.d.; LGBT Technology Institute n.d.) with recent advances in fairness research, including promising approaches to protecting unobserved characteristics. This work additionally advocates for the expanded inclusion of queer voices in fairness and ethics research, as well as the broader development of AI systems.

It is worth noting that certain algorithmic domains supplement egalitarian concerns with additional ethical values and principles. For example, fairness assessments of healthcare applications typically incorporate beneficence and non-maleficence, two principles central to medical ethics (Beauchamp and Childress 2001).

Throughout this paper, we use 'queer' and 'LGBTQ+' interchangeably. The heterogeneity of queer communities—and the complexity of the issues they face—preclude this work from being an exhaustive review of queer identity. As a result, there are likely perspectives that were not included in this manuscript, but that have an important place in broader discussions of queer fairness.
We make three contributions in this paper:

1. Expand on the promise of AI in empowering queer communities and supporting LGBTQ+ rights and freedoms.
2. Emphasise the potential harms and unique challenges raised by the sensitive and unmeasurable aspects of identity data for queer people.
3. Based on use cases from the queer experience, establish requirements for algorithmic fairness on unobserved characteristics.

Considerations for Queer Fairness
To emphasise the need for in-depth study of the impact of AI on queer communities around the world, we explore several case studies of how AI systems interact with sexual orientation and gender identity. Each of these case studies highlights both potential benefits and risks of AI applications for queer communities. In reviewing these cases, we hope to motivate the development of technological solutions that are inclusive and beneficial to everyone. Importantly, these case studies will demonstrate cross-cutting challenges and concerns raised by unobserved and missing characteristics, such as preserving privacy, supporting feature imputation, context-sensitivity, exposing coded inequity, participatory engagement, and sequential and fair processes.
Privacy
Sexual orientation and gender identity are highly private aspects of personal identity. Outing queer individuals—by sharing or exposing their sexual orientation or gender identity without their prior consent—can not only lead to emotional distress, but also risk serious physical and social harms, especially in regions where queerness is openly discriminated against (Wang et al. 2019), criminalised (DeJong and Long 2014) or persecuted (Scicchitano 2019). Privacy violations can thus have major consequences for queer individuals, including infringement upon their basic human rights (Bosia, McEvoy, and Rahman 2020; Amnesty International Canada 2015), denial of employment and education opportunities, ill-treatment, torture, sexual assault, rape, and extrajudicial killings.
Promise
Advances in privacy-preserving machine learning (Nasr, Shokri, and Houmansadr 2018; Bonawitz et al. 2017; Jayaraman and Evans 2019) present the possibility that the queer community might benefit from AI systems while minimising the risk of information leakage. Researchers have proposed adversarial filters (Zhang et al. 2020; Liu, Zhang, and Yu 2017) to obfuscate sensitive information in images and speech shared online while reducing the risks of re-identification.

Still, challenges remain (Srivastava et al. 2019). More research is needed to ensure the robustness of the adversarial approaches. The knowledge gap between the privacy and machine-learning research communities must be bridged for these approaches to achieve the desired effects. This will ensure that the appropriate types of protections are included in the ongoing development of AI solutions (Al-Rubaie and Chang 2019).
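To ground the privacy-preserving direction in something concrete, the sketch below illustrates the core mechanism behind one widely studied approach, differentially private gradient descent: per-example gradients are clipped and Gaussian noise is added so that no single record dominates a model update. It is a minimal NumPy illustration under an assumed logistic-regression setting, not the method of any system cited above; the function name and parameters are ours.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_mult=1.1, rng=None):
    """One differentially private gradient step for logistic regression (sketch).

    Each per-example gradient is clipped to `clip_norm`, and Gaussian noise
    scaled by `noise_mult * clip_norm` is added to the sum, limiting how much
    any single record can influence the released update.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    n = len(y)
    clipped = np.zeros((n, w.size))
    for i in range(n):
        p = 1.0 / (1.0 + np.exp(-X[i] @ w))            # predicted probability
        g = (p - y[i]) * X[i]                          # per-example gradient
        scale = min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        clipped[i] = g * scale
    noise = rng.normal(0.0, noise_mult * clip_norm, size=w.size)
    grad = (clipped.sum(axis=0) + noise) / n
    return w - lr * grad
```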
Risks
A multitude of privacy risks arise for queer people from the applications of AI systems. We focus in particular on the categorisation of identity from sensitive data, the ethical risk of surveillance, and invasions of queer spaces.

In 2017, Stanford researchers attempted to build an AI 'gaydar', a computer vision model capable of guessing a person's sexual orientation from images (Wang and Kosinski 2018). The resulting algorithm, a logistic regression model trained on 35,326 facial images, achieved a high reported accuracy in identifying self-reported sexual orientation across both sexes. The results of this study have since been questioned, largely on the basis of a number of methodological and conceptual flaws that discredit the performance of the system (Gelman, Mattson, and Simpson 2018). Other algorithms designed to predict sexual orientation have suffered similar methodological and conceptual deficiencies. A recently released app claimed to be able to quantify the evidence of one's non-heterosexual orientation based on genetic data, for example (Bellenson 2019), largely obfuscating the limited ability of genetic information to predict sexual orientation (Ganna et al. 2019).

Though these specific efforts have been flawed, it is plausible that in the near future algorithms could achieve high accuracy, depending on the data sources involved. Behavioural data recorded online present particular risks to the privacy of sexual orientation and gender identity: after all, the more time people spend online, the greater their digital footprint. AI 'gaydars' relying on an individual's recorded interests and interactions could pose a serious danger to the privacy of queer people. In fact, as a result of the long-running perception of the queer community as a profitable 'consumer group' by business and advertisers alike, prior efforts have used online data to map 'queer interests' in order to boost sales and increase profits (Sender 2018). In at least one instance, researchers have attempted to use basic social media information to reconstruct the sexual orientation of users (Bhattasali and Maiti 2015).

The ethical implications of developing such systems for queer communities are far-reaching, with the potential of causing serious harms to affected individuals. Prediction algorithms could be deployed at scale by malicious actors, particularly in nations where homosexuality and gender non-conformity are punishable offences. In fact, in many such nations, authorities already use technology to entrap or locate queer individuals through social media and LGBTQ+ dating apps (e.g., Culzac 2014). Systems predicting sexual orientation may also exacerbate the pre-existing privacy risks of participating in queer digital spaces. There have been recorded cases of coordinated campaigns for outing queer people, resulting in lives being ruined, or lost due to suicide (Embury-Dennis 2020). These malicious outing campaigns have until now been executed at smaller scales. However, recent developments in AI greatly amplify the potential scale of such incidents, endangering larger communities of queer people in certain parts of the world. Facial recognition technology (Voulodimos et al. 2018) could be employed by malicious actors to rapidly identify individuals sharing their pictures online, whether publicly or in direct messages. Facial recognition could similarly be used to automatically identify people in captured recordings of protests, in queer nightclubs or community spaces, and other in-person social events.
These possibilities highlight the potential dangers of AI for state-deployed surveillance technology. Chatbots have similarly been deployed to elicit private information on dating apps, compromising users' device integrity and privacy (McCormick 2015). Existing bots are scripted, and therefore can usually be distinguished from human users after longer exchanges. Nonetheless, strong language models (Brown et al. 2020) threaten to exacerbate such privacy risks, given their ability to quickly adjust to the style of communication based on a limited number of examples. These language models amplify existing concerns around the collection of private information and the compromising of safe online spaces.

In addition to the direct risks to privacy, algorithms intended to predict sexual orientation and gender identity also perpetuate concerning ideas and beliefs about queerness. Systems using genetic information as the primary input, for example, threaten to reinforce biological essentialist views of sexual orientation and echo tenets of eugenics—a historical framework that leveraged science and technology to justify individual and structural violence against people perceived as inferior (Ordover 2003; Wolbring 2001). More broadly, the design of predictive algorithms can lead to erroneous beliefs that biology, appearance or behaviour are the essential features of sexual orientation and gender identity, rather than imperfectly correlated causes, effects or covariates of queerness.

In sum, sexual orientation and gender identity are associated with key privacy concerns. Non-consensual outing and attempts to infer protected characteristics from other data thus pose ethical issues and risks to physical safety. In order to ensure queer algorithmic fairness, it will be important to develop methods that can improve fairness for marginalised groups without having direct access to group membership information.
Censorship
Although queer identity is essentially unmeasurable, we believe that its unrestricted outward expression in both physical and virtual spaces is a basic human right. Multiple groups and institutions around the world violate this right through censorship of queer content.

This censorship is often justified by its supporters as 'preserving decency' and 'protecting the youth', but in reality leads to the erasure of queer identity. Laws against 'materials promoting homosexuality' were established in the late 1980s in the United Kingdom and repealed as late as 2003 (Burridge 2004). Nations that are considered major world powers have laws banning the portrayal of same-sex romances in television shows (e.g., China; Lu and Hunt 2016), the mention of homosexuality or transgender identities in public education (e.g., state-level laws in the United States; Hoshall 2012), or any distribution of LGBT-related material to minors (e.g., Russia; Kondakov 2019). Not only do such laws isolate queer people from their communities—particularly queer youth—they implicitly shame queerness as indecent behaviour, setting a precedent for further marginalisation and undermining of human rights. Many queer content producers in such nations have argued that their online content is being restricted and removed at the detriment of queer expression and sex positivity, as well as at the cost of their income (York 2015).
Promise
AI systems may be effectively used to mitigatecensorship of queer content. Machine learning has been usedto analyse and reverse-engineer patterns of censorship. Astudy of 20 million tweets from Turkey employed machinelearning to show that the vast majority of censored tweetscontained political content (Tanash et al. 2015). A statisticalanalysis of Weibo posts and Chinese-language tweets un-covered a set of charged political keywords present in postswith anomalously high deletion rates (Bamman, O’Connor,and Smith 2012). Further study of censorship could be keyto drawing the international community’s attention to hu-man rights violations. It could also potentially be used toempower affected individuals to circumvent these unfair re-strictions. However, a large-scale study of deleted queer con-tent in countries which censor such content has yet to beconducted.
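A hedged sketch of the kind of deletion-rate analysis described above: it compares the deletion rate of posts containing a given keyword against the overall deletion rate and flags anomalously censored terms. The data, keyword list, and the simple binomial test are illustrative placeholders, not the methodology of the cited studies.

```python
import numpy as np

def flag_censored_keywords(posts, deleted, keywords, z_threshold=3.0):
    """Flag keywords whose posts are deleted anomalously often (toy analysis).

    `posts` is a list of strings and `deleted` a parallel boolean sequence.
    A keyword is flagged when its deletion rate exceeds the platform-wide
    rate by more than `z_threshold` binomial standard errors.
    """
    deleted = np.asarray(deleted, dtype=float)
    base_rate = deleted.mean()
    flagged = []
    for kw in keywords:
        mask = np.array([kw in p.lower() for p in posts])
        n = int(mask.sum())
        if n == 0:
            continue
        rate = deleted[mask].mean()
        se = np.sqrt(base_rate * (1.0 - base_rate) / n)  # binomial std. error
        if se > 0 and (rate - base_rate) / se > z_threshold:
            flagged.append((kw, float(rate), n))
    return flagged
```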
Risks
Although we believe machine learning can be used to combat censorship, tools for detecting queer digital content can be abused to enforce censorship laws or heteronormative cultural attitudes. As social network sites, search engines and other media platforms adopt algorithms to moderate content at scale, the risk for unfair or biased censorship of queer content increases, and governing entities are empowered to erase queer identities from the digital sphere (Cobbe 2019). Automated content moderation systems are at risk of censoring queer expression even when the intention is benign, such as protecting users from verbal abuse. To help combat censorship restrictions and design fair content moderation systems, ML fairness researchers could investigate how to detect and analyse anomalous omission of information related to queer identity (or other protected characteristics) in natural language and video data.

Censorship often goes hand-in-hand with the distortion of facts. Recent advances in generative models have made the fabrication of digital content trivial, given enough data and computational power (Chesney and Citron 2019). Malicious and dehumanising misinformation about the queer community has been used as justification for abuse and suppression throughout history, tracing back to medieval interpretations of ancient religious texts (Dynes 2014). Technological and political solutions to the threat of misinformation are important for protecting queer expression—as well as global democracy. The AI community has begun to develop methods to verify authentic data through, for example, open datasets and benchmarks for detecting synthetic images and video (Rossler et al. 2019).

While the goal of fairness for privacy is preventing the imputation of sensitive data, the goal of fairness for censorship is to reveal the unfair prevention of expression. This duality could surface important technical connections between these fields. In terms of social impact, many people around the world outside of the queer community are negatively affected by censorship. Further research in fairness for censorship could have far-reaching benefit across technical fields, social groups and borders.
Language
Language encodes and represents our way of thinking and communicating about the world. There is a long history of oppressive language being weaponised against the queer community (Nadal et al. 2011; Thurlow 2001), highlighting the need for developing fair and inclusive language models.

Inclusive language (Weinberg 2009) extends beyond the mere avoidance of derogatory terms, as there are many ways in which harmful stereotypes can surface. For example, the phrase 'That's so gay' (Chonody, Rutledge, and Smith 2012) equates queerness with badness. Using the term 'sexual preference' rather than 'sexual orientation' can imply that sexual orientation is a volitional choice, rather than an intrinsic part of one's identity. Assuming one's gender identity, without asking, is harmful to the trans community as it risks misgendering people. This can manifest in the careless use of assumed pronouns, without knowledge of an individual's identification and requested pronouns. Reinforcing binary and traditional gender expression stereotypes, regardless of intent, can have adverse consequences. The use of gender-neutral pronouns has been shown to result in lower bias against women and LGBTQ+ people (Tavits and Pérez 2019). To further complicate the matter, words which originated in a derogatory context, such as the label 'queer' itself, are often reclaimed by the community in an act of resistance. This historical precedent suggests that AI systems must be able to adapt to the evolution of natural language and avoid censoring language based solely on its adjacency to the queer community.
Promise
Natural language processing applications permeate the field of AI. These applications include use cases of general interest like machine translation, speech recognition, sentiment analysis, question answering, chatbots and hate speech detection systems. There is an opportunity to develop language-based AI systems inclusively—to overcome human biases and establish inclusive norms that would facilitate respectful communication with regards to sexual orientation and gender identity (Strengers et al. 2020).
Risks
Biases, stereotypes and abusive speech are persistently present in top-performing language models, as a result of their presence in the vast quantities of training data that are needed for model development (Costa-jussà 2019). Formal frameworks for measuring and ensuring fairness (Hendrycks et al. 2020a,b; Sheng et al. 2020) in language are still in nascent stages of development. Thus, for AI systems to avoid reinforcing harmful stereotypes and perpetuating harm to marginalised groups, research on inclusive language requires more attention. For language systems to be fair, they must be capable of reflecting the contextual nature of human discourse.
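One simple way to probe such biases, sketched below under assumed access to a toxicity or sentiment scorer, is to compare model scores on otherwise-neutral template sentences that differ only in the identity term they mention. The templates, terms, and function names are illustrative; this is a toy audit, not a formal fairness framework.

```python
import numpy as np

TEMPLATES = ["I am a {} person.", "My friend is {}.", "{} people live next door."]
IDENTITY_TERMS = ["straight", "gay", "lesbian", "queer", "transgender", "bisexual"]

def identity_term_gaps(score_fn):
    """Mean score per identity term, relative to the mean across all terms.

    `score_fn` wraps whichever toxicity or sentiment model is being audited
    and returns a float for one sentence. Large positive gaps on these
    otherwise-neutral templates suggest the identity term itself is being
    treated as a signal of toxicity.
    """
    scores = {term: float(np.mean([score_fn(t.format(term)) for t in TEMPLATES]))
              for term in IDENTITY_TERMS}
    baseline = float(np.mean(list(scores.values())))
    return {term: s - baseline for term, s in scores.items()}
```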
Fighting Online Abuse
The ability to safely participate in online platforms is critical for marginalised groups to form a community and find support (Liu 2020). However, this is often challenging due to pervasive online abuse (Jane 2020). Queer people are frequently targets of internet hate speech, harassment and trolling. This abuse may be directed at the community as a whole or at specific individuals who express their queer identity online. Adolescents are particularly vulnerable to cyberbullying and the associated adverse effects, including depression and suicidal ideation (Abreu and Kenny 2018). Automated systems for moderation of online abuse are a possible solution that can protect the psychological safety of the queer community at a global scale.
Promise
AI systems could potentially be used to help human moderators flag abusive online content and communication directed at members of marginalised groups, including the queer community (Saha et al. 2019; Schmidt and Wiegand 2017). A proof of concept for this application was developed in the Troll Patrol project (Delisle et al. 2019; Amnesty International n.d.), a collaboration between Amnesty International and Element AI's former AI for Good team. The Troll Patrol project investigated the application of natural language processing methods for quantifying abuse against women on Twitter. The project revealed concerning patterns of online abuse and highlighted the technological challenges required to develop online abuse detection systems. Recently, similar systems have been applied to tweets directed at the LGBTQ+ community. Machine learning and sentiment analysis were leveraged to predict homophobia in Portuguese tweets, resulting in 89.4% accuracy (Pereira 2018). Deep learning has also been used to evaluate the level of public support and perception of LGBTQ+ rights following the Supreme Court of India's verdict regarding the decriminalisation of homosexuality (Khatua et al. 2019).

The ways in which abusive comments are expressed when targeted at the trans community pose some idiosyncratic research challenges. In order to protect the psychological safety of trans people, it is necessary for automated online abuse detection systems to properly recognise acts of misgendering or 'deadnaming'. These systems have a simultaneous responsibility to ensure that deadnames and other sensitive information are kept private to the user. It is therefore essential for the queer community to play an active role in informing the development of such systems.
Risks
Systems developed with the purpose of automatically identifying toxic speech could introduce harms by failing to recognise the context in which speech occurs. Mock impoliteness, for example, helps queer people cope with hostility; the communication style of drag queens in particular is often tailored to be provocative. A recent study (Gomes, Antonialli, and Dias Oliva 2019) demonstrated that an existing toxicity detection system would routinely consider drag queens to be as offensive as white supremacists in their online presence. The system further specifically associated high levels of toxicity with words like 'gay', 'queer' and 'lesbian'.

Another risk in the context of combating online abuse is unintentionally disregarding entire groups through ignorance of intersectional issues. Queer people of colour experience disproportionate exposure to online (and offline) abuse (Balsam et al. 2011), even within the queer community itself. Neglecting intersectionality can lead to disproportionate harms for such subcommunities.

To mitigate these concerns, it is important for the research community to employ an inclusive and participatory approach (Martin Jr. et al. 2020) when compiling training datasets for abusive speech detection. For example, there are homophobic and transphobic slurs with a racialised connotation that should be included in training data for abuse detection systems. Furthermore, methodological improvements may help advance progress. Introducing fairness constraints to model training has demonstrably helped mitigate the bias of cyber-bullying detection systems (Gencoglu 2020). Adversarial training can similarly assist by demoting the confounds associated with texts of marginalised groups (Xia, Field, and Tsvetkov 2020).
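As a rough illustration of the constraint-based mitigation idea mentioned above (in the spirit of, though not reproducing, Gencoglu 2020), the sketch below adds a penalty to a logistic-regression loss for the gap in predicted abuse scores between non-abusive texts that mention queer identity terms and those that do not, a smooth stand-in for equalising false-positive rates. All names and the penalty form are our own simplifications.

```python
import numpy as np

def fair_logistic_loss(w, X, y, mentions_identity, lam=1.0):
    """Cross-entropy plus a simple fairness penalty (illustrative surrogate).

    `mentions_identity` is a boolean mask over rows marking texts that
    mention queer identity terms. The penalty is the squared gap in mean
    predicted abuse score between non-abusive in-group and out-group texts,
    a smooth proxy for equalising false-positive rates. Assumes both groups
    are represented among the non-abusive (y == 0) examples.
    """
    p = 1.0 / (1.0 + np.exp(-X @ w))
    eps = 1e-12
    ce = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    neg = (y == 0)
    gap = p[neg & mentions_identity].mean() - p[neg & ~mentions_identity].mean()
    return ce + lam * gap ** 2
```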
Health
The drive towards equitable outcomes in healthcare entails a set of unique challenges for marginalised communities. Queer communities have been disproportionately affected by HIV (Singh et al. 2018), suffer a higher incidence of sexually-transmitted infections, and are afflicted by elevated rates of substance abuse (Wallace and Santacruz 2017). Compounding these issues, queer individuals frequently experience difficulties accessing appropriate care (Bize et al. 2011; Human Rights Watch 2018). Healthcare professionals often lack appropriate training to best respond to the needs of LGBTQ+ patients (Schneider, Silenzio, and Erickson-Schroth 2019). Even in situations where clinicians do have the proper training, patients may be reluctant to reveal their sexual orientation and gender identity, given past experiences with discrimination and stigmatisation.

In recent months, the COVID-19 pandemic has amplified health inequalities (Bowleg 2020; van Dorn, Cooney, and Sabin 2020). Initial studies during the pandemic have found that LGBTQ+ patients are experiencing poorer self-reported health compared to cisgendered heterosexual peers (O'Neill 2020). The health burden of COVID-19 may be especially severe for queer people of colour, given the substantial barriers they face in accessing healthcare (Hsieh and Ruther 2016).
Promise
To this day, the prevalence of HIV among the queer community remains a major challenge. Introducing systems that both reduce the transmission risk and improve care delivery for HIV+ patients will play a critical role in improving health outcomes for queer individuals.

Machine learning presents key opportunities to augment medical treatment decisions (Bisaso et al. 2017). For example, AI may be productively applied to identify the patients most likely to benefit from pre-exposure prophylaxis for HIV. A research team recently developed such a system, which correctly identified 38.6% of future cases of HIV (Marcus et al. 2019). The researchers noted substantial challenges: model sensitivity on the validation set was 46.4% for men and 0% for women, highlighting the importance of intersectionality for fair outcomes in healthcare. Machine learning has also been used to predict early virological suppression (Bisaso et al. 2018), adherence to antiretroviral therapy (Semerdjian et al. 2018), and individual risk of complications such as chronic kidney disease (Roth et al. 2020) or antiretroviral therapy-induced mitochondrial toxicity (Lee et al. 2019).
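Reporting performance separately for each subgroup, as the cited study did for men and women, is a prerequisite for spotting such failures. The sketch below computes sensitivity (recall) per subgroup from binary predictions; the subgroup masks would come from whatever strata are available and consented to, and the function is an illustrative stand-in rather than the evaluation code of any cited work.

```python
import numpy as np

def subgroup_sensitivity(y_true, y_pred, subgroups):
    """Sensitivity (recall) per subgroup for a binary classifier.

    `subgroups` maps a label such as "women" to a boolean mask over rows.
    Reporting these values separately surfaces failures that an aggregate
    metric hides, such as the 0% sensitivity for women noted above.
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    results = {}
    for name, mask in subgroups.items():
        positives = np.asarray(mask) & (y_true == 1)
        results[name] = float(y_pred[positives].mean()) if positives.any() else float("nan")
    return results
```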
Risks
Recent advances in AI in healthcare may lead to widespread increases in welfare. Yet there is a risk that benefits will be unequally distributed—and an additional risk that queer people's needs will not be properly met by the design of current systems. Information about sexual orientation and gender identity is frequently absent from research datasets. To mitigate the privacy risk for patients and prevent reidentification, HIV status and substance abuse are also routinely omitted from published data. While such practices may be necessary, it is worth recognising the important downstream consequences they have for AI system development in healthcare. It can become impossible to assess fairness and model performance across the omitted dimensions. Moreover, the unobserved data increase the likelihood of reduced predictive performance (since the features are dropped), which itself results in worse health outcomes. The coupled risk of a decrease in performance and an inability to measure it could drastically limit the benefits from AI in healthcare for the queer community, relative to cisgendered heterosexual patients. To prevent the amplification of existing inequities, there is a critical need for targeted fairness research examining the impacts of AI systems in healthcare for queer people.

To help assess the quality of care provided to LGBTQ+ patients, there have been efforts aimed at approximately identifying sexual orientation (Bjarnadottir et al. 2019) and gender identity (Ehrenfeld et al. 2019) from clinical notes within electronic health record systems. While well intentioned, these machine learning models offer no guarantee that they will only identify patients who have explicitly disclosed their identities to their healthcare providers. These models thus introduce the risk that patients will be outed without their consent. Similar risks arise from models developed to rapidly identify HIV-related social media data (Young, Yu, and Wang 2017).

The risk presented by AI healthcare systems could potentially intensify during medical gender transitions. There are known adverse effects associated with transition treatment (Moore, Wisniewski, and Dobs 2003). The active involvement of medical professionals with experience in cross-sex hormonal therapy is vital for ensuring the safety of trans people undergoing hormone therapy or surgery. Since cisgendered individuals provide the majority of anonymised patient data used to develop AI systems for personalised healthcare, there will be comparatively fewer cases of trans patients experiencing many medical conditions. This scarcity could have an adverse impact on model performance—there will be an insufficient accounting for the interactions between the hormonal treatment, its adverse effects and potential comorbidities, and other health issues potentially experienced by trans patients.

Framing fairness as a purely technical problem that can be addressed by the mere inclusion of more data or computational adjustments is ethically problematic, especially in high-stakes domains like healthcare (McCradden et al. 2020). Selection bias and confounding in retrospective data make causal inference particularly hard in this domain. Counterfactual reasoning may prove key for safely planning interventions aimed at improving health outcomes (Prosperi et al. 2020). It is critical for fairness researchers to engage deeply with both clinicians and patients to ensure that their needs are met and AI systems in healthcare are developed and deployed safely and fairly.
Mental Health
Queer people are more susceptible to mental health problems than their heterosexual and cisgender peers, largely as a consequence of the chronically high levels of stress associated with prejudice, stigmatisation and discrimination (Meyer 1995, 2003; Mays and Cochran 2001; Tebbe and Moradi 2016). As a result, queer communities experience substantial levels of anxiety, depression and suicidal ideation (Mental Health Foundation 2020). Compounding these issues, queer people often find it more difficult to ask for help and articulate their distress (McDermott 2015) and face systemic barriers to treatment (Romanelli and Hudson 2017). A recent LGBTQ+ mental health survey highlighted the shocking extent of issues permeating queer communities (The Trevor Project 2020): 40% of LGBTQ+ respondents seriously considered attempting suicide in the past twelve months, with more than half of transgender and non-binary youth having seriously considered suicide; 68% of LGBTQ+ youth reported symptoms of generalised anxiety disorder in the past two weeks, including more than three in four transgender and nonbinary youth; 48% of LGBTQ+ youth reported engaging in self-harm in the past twelve months, including over 60% of transgender and nonbinary youth.
Promise
AI systems have the potential to help address the alarming prevalence of suicide in the queer community. Natural language processing could be leveraged to predict suicide risk based on traditional data sources (such as questionnaires and recorded interactions with mental health support workers) or new data sources (including social media and engagement data). The Trevor Project, a prominent American organisation providing crisis intervention and suicide prevention services to LGBTQ+ youth (The Trevor Project n.d.), is one organisation working on such an initiative. In partnership with Google.org and its research fellows, The Trevor Project developed an AI system to identify and prioritise community members at high risk while simultaneously increasing outreach to new contacts. The system was designed to relate different types of intake-form responses to downstream diagnosis risk levels. A separate group of researchers developed a language processing system (Liang et al. 2019) to identify help-seeking conversations on LGBTQ+ support forums, with the aim of helping at-risk individuals manage and overcome their issues.

In other healthcare contexts, reinforcement learning has recently demonstrated potential in steering behavioural interventions (Yom-Tov et al. 2017) and improving health outcomes. Reinforcement learning represents a natural framework for personalised health interventions, since it can be set up to maximise long-term physical and mental well-being (Tabatabaei, Hoogendoorn, and van Halteren 2018). If equipped with natural language capabilities, such systems might be able to act as personalised mental health assistants empowered to support mental health and escalate situations to human experts in concerning situations.
Risks
Substantial risks accompany these applications. Overall, research on any intervention-directed systems should be undertaken in partnership with trained mental health professionals and organisations, given the considerable risks associated with misdiagnosing mental illness (cf. Suite et al. 2007) and exacerbating the vulnerability of those experiencing distress.

The automation of intervention decisions and mental health diagnoses poses a marked risk for the trans community. In most countries, patients must be diagnosed with gender dysphoria—an extensive process with lengthy wait times—before receiving treatments such as hormone therapy or surgery (e.g., National Health Service n.d.). During this process, many transgender individuals experience mistrust and invalidation of their identities from medical professionals who withhold treatment based on rigid or discriminatory views of gender (Ashley 2019). Automating the diagnosis of gender dysphoria may recapitulate these biases and deprive many transgender patients of access to care.

Mental health information is private and sensitive. While AI systems have the potential to aid mental health workers in identifying at-risk individuals and those who would most likely benefit from intervention, such models may be misused in ways that expose the very people they were designed to support. Such systems could also lead queer communities to be shut out from employment opportunities or to receive higher health insurance premiums. Furthermore, reinforcement learning systems for behavioural interventions will present risks to patients unless many open problems in the field can be resolved, such as safe exploration (Hans et al. 2008) and reward specification (Krakovna et al. 2020). The development of safe intervention systems that support the mental health of the queer community is likely also contingent on furthering frameworks for sequential fairness (Heidari and Krause 2018), to fully account for challenges in measuring and promoting queer ML fairness.
Employment
Queer people often face discrimination both during the hiring process (resulting in reduced job opportunities) and once hired and employed (interfering with engagement, development and well-being; Sears and Mallory 2011). Non-discrimination laws and practices have had a disparate impact across different communities. Employment nondiscrimination acts in the United States have led to an average increase in the hourly wages of gay men by 2.7% and a decrease in employment of lesbian women by 1.7% (Burn 2018), suggesting that the impact of AI on employment should be examined through an intersectional lens.
Promise
To effectively develop AI systems for hiring, researchers must first attempt to formalise a model of the hiring process. Formalising such models may make it easier to inspect current practices and identify opportunities for removing existing biases. Incorporating AI into employment decision processes could potentially prove beneficial if unbiased systems are developed (Houser 2019), though this seems difficult at the present moment and carries serious risks.
Risks
Machine learning-based decision making systems (e.g., candidate prioritisation systems) developed using historical data could assign lower scores to queer candidates, purely based on historical biases. Prior research has demonstrated that resumes containing items associated with queerness are scored significantly lower by human graders than the same resumes with such items removed (LeCroy and Rodefer 2019). These patterns can be trivially learned and reproduced by resume-parsing machine learning models.

A combination of tools aimed at social media scraping, linguistic analysis, and an analysis of interests and activities could indirectly infringe on candidates' privacy by outing them to their prospective employers without their prior consent. The interest in these tools stems from the community's emphasis on big data approaches, not all of which will have been scientifically verified from the perspective of impact on marginalised groups.

Both hiring and subsequent employment are multi-stage processes of considerable complexity, wherein technical AI tools may be used across multiple stages. Researchers will not design and develop truly fair AI systems by merely focusing on metrics of subsystems in the process, abstracting away the social context of their application and their interdependence. It is instead necessary to see these as sociotechnical systems and evaluate them as such (Selbst et al. 2019).
Sources of Unobserved Characteristics
Most algorithmic fairness studies have made progress because of their focus on observed characteristics—commonly, race and legal gender. To be included in training or evaluation data for an algorithm, an attribute must be measured and recorded. Many widely available datasets thus focus on immutable characteristics (such as ethnic group) or characteristics which are recorded and regulated by governments (such as legal gender, monetary income or profession).

In contrast, characteristics like sexual orientation and gender identity are frequently unobserved (Andrus et al. 2020; Crocker, Major, and Steele 1998; Jacobs and Wallach 2019). Multiple factors contribute to this lack of data. In some cases, the plan for data collection fails to incorporate questions on sexual orientation and gender identity—potentially because the data collector did not consider or realise that they are important attributes to record (Herek et al. 1991). As a result, researchers may inherit datasets where assessment of sexual orientation and gender identity is logistically excluded. In other situations, regardless of the surveyor's intent, the collection of certain personal data may threaten an individual's privacy or their safety. Many countries have legislation that actively discriminates against LGBTQ+ people (Human Rights Watch n.d.). Even in nations with hard-won protections for the queer community, cultural bias persists. To shield individuals from this bias and protect their privacy, governments may instate legal protections for sensitive data, including sexual orientation (European Commission n.d.). As a result, such data may be ethically or legally precluded for researchers. Finally, as recognised by discursive theories of gender and sexuality, sexual orientation and gender identity are fluid cultural constructs that may change over time and across social contexts (Butler 2011). Attempts to categorise, label, and record such information may be inherently ill-posed (Hamidi, Scheuerman, and Branham 2018). Thus, some characteristics are unobserved because they are fundamentally unmeasurable. These inconsistencies in awareness and measurability yield discrepancies and tension in how fairness is applied across different contexts (Bogen, Rieke, and Ahmed 2020).

Race and ethnicity are not immune to these challenges. Race and ethnicity may be subject to legal observability issues in settings where race-based discrimination is a sensitive issue (e.g., hiring). Additionally, the definition of racial and ethnic groups has fluctuated across time and place (Hanna et al. 2020). This is exemplified by the construction of Hispanic identity in the United States and its inclusion on the National Census, as well as the exclusion of multiracial individuals from many censuses until relatively recently (Mora 2014). Though we choose to focus our analysis on queer identity, we note that the observability and measurability of race are also important topics (e.g., Scheuerman et al. 2020).
Areas for Future Research
The field of algorithmic fairness in machine learning is rapidly expanding. To date, however, most studies have overlooked the implications of their work for queer people. To include sexual orientation and gender identity in fairness research, it will be necessary to explore new technical approaches and evaluative frameworks. To prevent the risk of AI systems harming the queer community—as well as other marginalised groups whose defining features are similarly unobserved and unmeasurable—fairness research must be expanded.
Expanding Fairness for Queer Identities
Machine learning models cannot be considered fair unless they explicitly factor in and account for fairness towards the LGBTQ+ community. To minimise the risks and harms to queer people worldwide and avoid contributing to ongoing erasures of queer identity, researchers must propose solutions that explicitly account for fairness with respect to the queer community.

The intersectional nature of sexual orientation and gender identity (Parent, DeBlaere, and Moradi 2013) emerges as a recurring theme in our discussions of online abuse, health and employment. These identities cannot be understood without incorporating notions of economic and racial justice. Deployed AI systems may pose divergent risks to different queer subcommunities; AI risks may vary between gay, bisexual, lesbian, transgender and other groups. It is therefore important to apply an appropriate level of granularity to the analysis of fairness for algorithmic issues. Policies can simultaneously improve the position of certain queer groups while adversely affecting others—highlighting the need for an intersectional analysis of queer fairness.

Demographic parity has been the focus of numerous ML fairness studies and seems to closely match people's conceptions of fairness (Srivastava, Heidari, and Krause 2019). However, this idea is very hard to promote in the context of queer ML fairness. Substantial challenges are posed by the sensitivity of group membership information and its absence from most research datasets, as well as the outing risks associated with attempts to automatically derive such information from existing data (Bjarnadottir et al. 2019; Ehrenfeld et al. 2019; Young, Yu, and Wang 2017). Consensually provided self-identification data, if and when available, may only capture a fraction of the community. The resulting biased estimates of queer fairness may involve high levels of uncertainty (Ethayarajh 2020), though it may be possible to utilise unlabeled data for tightening the bounds (Ji, Smyth, and Steyvers 2020). While it is possible to root the analysis in proxy groups (Gupta et al. 2018), there is a risk of incorporating harmful stereotypes in proxy group definitions, potentially resulting in harms of representation (Abbasi et al. 2019). Consequently, most ML fairness solutions developed with a specific notion of demographic parity in mind may be inappropriate for ensuring queer ML fairness.

Individual (Dwork et al. 2012; Jung et al. 2019), counterfactual (Kusner et al. 2017), and contrastive (Chakraborti, Patra, and Noble 2020) fairness present alternative definitions and measurement frameworks that may prove useful for improving ML fairness for queer communities. However, more research is needed to overcome implementational challenges for these frameworks and facilitate their adoption.

A small body of work aims to address fairness for protected groups when the collection of protected attributes is legally precluded (e.g., by privacy and other regulation). Adversarially Re-weighted Learning (Lahoti et al. 2020) aims to address this issue by relying on measurable covariates of protected characteristics (e.g., zip code as a proxy for race). This approach aims to achieve intersectional fairness by optimising group fairness between all computationally identifiable groups (Kearns et al. 2018; Kim, Reingold, and Rothblum 2018).
Distributionally robust optimisation represents an alternative method for preventing disparity amplification, bounding the worst-case risk over groups with unknown group membership by optimising the worst-case risk over an appropriate risk region (Hashimoto et al. 2018). These methods have helped establish a link between robustness and fairness, and have drawn attention to the synergistic benefits of considering the relationship between fairness and ML generalisation (Creager, Jacobsen, and Zemel 2020). Other adversarial approaches have also been proposed for improving counterfactual fairness, and by operating in continuous settings, have been shown to be a better fit for protected characteristics that are hard to enumerate (Grari, Lamprier, and Detyniecki 2020).

Fairness mitigation methods have been shown to be vulnerable to membership inference attacks where the information leak increases disproportionately for underprivileged subgroups (Chang and Shokri 2020). This further highlights the tension between privacy and fairness, a common theme when considering the impact of AI systems on queer communities. It is important to recognise the need for fairness solutions to respect and maintain the privacy of queer individuals and to be implemented in a way that minimises the associated reidentifiability risks. Differentially private fair machine learning (Jagielski et al. 2019) could potentially provide such guarantees, simultaneously meeting the requirements of fairness, privacy and accuracy.

Putting a greater emphasis on model explainability may prove crucial for ensuring ethical and fair AI applications in cases when fairness metrics are hard or impossible to reliably compute for queer communities. Understanding how AI systems operate may help identify harmful biases that are likely to have adverse downstream consequences, even if these consequences are hard to quantify accurately. Even in cases when queer fairness can be explicitly measured, there is value in identifying which input features contribute the most to unfair model outcomes (Begley et al. 2020), in order to better inform mitigation strategies.

It is important to acknowledge the unquestionable cisnormativity of sex and gender categories traditionally used in the AI research literature. The assumption of fixed, binary genders fails to include and properly account for non-binary identities and trans people (Keyes 2018). Incorporating such biases in the early stages of AI system design poses a substantial risk of harm to queer people. Moving forward, more attention should be directed to address this lacuna.

Creating more-equitable AI systems will prove impossible without listening to those who are at greatest risk. Therefore, it is crucial for the AI community to involve more queer voices in the development of AI systems, ML fairness, and ethics research (Poulsen, Fosch-Villaronga, and Søraa 2020). For example, the inclusion of queer perspectives might have prevented the development of natural language systems that inadvertently censor content which is wrongly flagged as abusive or inappropriate simply due to its adjacency to queer culture, such as in the example of scoring drag queen language as toxic. Researchers should make efforts to provide a safe space for LGBTQ+ individuals to express their opinions and share their experiences. Queer in AI workshops have recently been organised at the Neural Information Processing Systems conference (Agnew et al. 2018; Agnew, Bilenko, and Gontijo Lopes 2019) and the International Conference on Machine Learning (John et al. 2020), providing a valuable opportunity for queer AI researchers to network in a safe environment and discuss research at the intersection of AI and queer identity.
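To make the worst-case idea above more tangible, the sketch below shows a simple CVaR-style objective in the spirit of distributionally robust optimisation: instead of the average loss, it averages the hardest fraction of per-example losses, which upper-bounds the mean loss of any subgroup comprising at least that fraction of the data, even when group membership is unobserved. This is a didactic simplification, not the formulation of Hashimoto et al. (2018) or Lahoti et al. (2020).

```python
import numpy as np

def worst_case_loss(per_example_loss, alpha=0.2):
    """Mean loss over the hardest `alpha` fraction of examples (CVaR-style).

    Any subgroup making up at least an `alpha` share of the data cannot have
    a higher mean loss than this value, so minimising it bounds the risk for
    unobserved groups above that size without requiring group labels.
    """
    losses = np.sort(np.asarray(per_example_loss))[::-1]
    k = max(1, int(np.ceil(alpha * losses.size)))
    return float(losses[:k].mean())
```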
Fairness for Other Unobserved Characteristics
The queer community is not the only marginalised group for which group membership may be unobserved (Crocker, Major, and Steele 1998). Religion, disability status, and class are additional examples where fairness is often challenged by observability (Kattari, Olzman, and Hanna 2018; Sanchez and Schlossberg 2001). Critically, they may also benefit from developments or solutions within queer fairness research. For example, in nations where individuals of certain religious groups are persecuted or subjected to surveillance, privacy is an essential prerequisite for safety. Persecution targeting religious communities may also include censorship or manipulation of information (Cook 2017). Even in nations where religious freedoms are legally protected, religious minorities may be subjected to online abuse such as hate speech or fear-mongering stereotypes (Awan 2014).

Although the nature of the discrimination is different, people with disabilities are also a frequent target of derogatory language on the internet, and are more likely to be harassed, stalked or trolled online, often to the detriment of their mental health (Sherry 2019). Youth with disabilities more frequently suffer from adverse mental health due to bullying, and people of all ages with physical disabilities are at higher risk for depression (King et al. 2018; Turner and Noh 1988). Therefore, individuals with disabilities may benefit from insights on the interaction of unobserved characteristics and mental health. Lower-income and lower-class individuals also suffer from worse mental health, particularly in countries with high economic inequality (Liu and Ali 2008). Fairness for class and socioeconomic status is also an important consideration for employment, where class bias in hiring limits employee diversity and may prevent economic mobility (Kraus et al. 2019).

Any particular dataset or AI application may instantiate observability difficulties with respect to multiple demographics. This may frequently be the case for disability status and class, for example. Individual fairness—a set of approaches based on the notion of treating similar individuals similarly (Dwork et al. 2012; Jung et al. 2019)—could potentially promote fairness across multiple demographics. These approaches entail a handful of challenges, however. The unobserved group memberships cannot be incorporated in the similarity measure. As a result, the similarity measure used for assessing individual fairness must be designed carefully. To optimise fairness across multiple demographics and better capture the similarity between people on a fine-grained level, similarity measures will likely need to incorporate a large number of proxy features. This would be a marked divergence from the proposed measures in most published work. Counterfactual and contrastive fairness metrics come with their own set of practical implementation challenges.

On the other hand, the approaches aimed at providing worst-case fairness guarantees for groups with unknown group membership (Hashimoto et al. 2018; Lahoti et al. 2020) apply by definition to any marginalised group. They are also specifically tailored to address the situation of unobserved protected characteristics. Therefore, fairness solutions required to address queer ML fairness are likely to be applicable to other groups as well.

Fairness challenges are institutionally and contextually grounded, and it is important to go beyond purely computational approaches to fully assess the sociotechnical aspects of the technology being deployed.
The complexity of these issues precludes any single group from tackling them in their entirety, and a resolution would ultimately require an ecosystem involving a multitude of partnering organisations, jointly monitoring, measuring and reporting the fairness of such systems (Veale and Binns 2017).

These issues are only a small sample of the common challenges faced by groups with typically unobserved characteristics. We invite future work to explore the impact of AI from the perspective of such groups. It is important to acknowledge that people with different identities have distinct experiences of marginalisation, stigmatisation and discrimination. However, recognising common patterns of injustice will likely enable the development of techniques that can transfer across communities and enhance fairness for multiple groups. In this way, shared ethical and technical design principles for AI fairness will hopefully result in a more equitable future.
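For completeness, the sketch below shows one way the individual-fairness criterion discussed above might be spot-checked once a proxy-feature similarity space has been chosen: sample pairs of individuals and flag those whose predictions differ by more than their proxy distance allows. The metric, threshold, and sampling scheme are illustrative assumptions; designing a defensible similarity measure remains the hard, value-laden step discussed above.

```python
import numpy as np

def individual_fairness_violations(X_proxy, scores, lipschitz=1.0, n_pairs=10000, rng=None):
    """Spot-check 'similar individuals receive similar outcomes' (sketch).

    Random pairs are flagged when the gap in model scores exceeds
    `lipschitz` times the pair's distance in the chosen proxy-feature space.
    The audit is only as meaningful as that similarity space.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    n = len(scores)
    violations = []
    for _ in range(n_pairs):
        i, j = rng.integers(0, n, size=2)
        dist = np.linalg.norm(X_proxy[i] - X_proxy[j])
        gap = abs(scores[i] - scores[j])
        if gap > lipschitz * dist:
            violations.append((int(i), int(j), float(gap), float(dist)))
    return violations
```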
Conclusion
The queer community has surmounted numerous historical challenges and continues to resist oppression in physical and digital spaces around the world. Advances in artificial intelligence represent both a potential aid to this resistance and a risk of exacerbating existing inequalities. This risk should motivate researchers to design and develop AI systems with fairness for queer identities in mind. Systems that attempt to label sexual orientation and gender identity, even for the purpose of fairness, raise technical and ethical challenges regarding observability and measurability.

A new discourse on queer fairness has the potential to identify moral and practical considerations shared across queer communities, as well as concerns specific to particular subpopulations in particular places. By further developing techniques supporting fairness for unobserved characteristics, the machine learning community can support queer communities and other marginalised groups. Broadly, the present work—surveying the ways in which AI may ameliorate or exacerbate issues faced by queer communities—emphasises the need for machine learning practitioners to design systems with fairness and dignity in mind.

Disclaimer
Any opinions presented in this paper represent the personal views of the authors and do not necessarily reflect the official policies or positions of their organisations.
References
Abbasi, M.; Friedler, S. A.; Scheidegger, C.; and Venkata-subramanian, S. 2019. Fairness in representation: Quantify-ing stereotyping as a representational harm. In
Proceedingsof the 2019 SIAM International Conference on Data Mining ,801–809. SIAM.Abreu, R. L.; and Kenny, M. C. 2018. Cyberbullying andLGBTQ youth: A systematic literature review and recom-mendations for prevention and intervention.
Journal ofChild & Adolescent Trauma
IEEE Security &Privacy arXiv preprint arXiv:2011.02282
Journal of Medi-cal Ethics
Policy & Internet
Cultural Diversity and Ethnic Minority Psychology
First Monday
Principles of Biomedical Ethics. Oxford University Press, USA. Begley, T.; Schwedes, T.; Frye, C.; and Feige, I. 2020. Explainability for fair machine learning. arXiv preprint arXiv:2010.07389
Conference on Fairness, Accountability and Transparency, 149–159. Bisaso, K. R.; Anguzu, G. T.; Karungi, S. A.; Kiragga, A.; and Castelnuovo, B. 2017. A survey of machine learning applications in HIV clinical research and care.
Computers in Biology and Medicine
91: 366–371. Bisaso, K. R.; Karungi, S. A.; Kiragga, A.; Mukonzo, J. K.; and Castelnuovo, B. 2018. A comparative study of logistic regression based machine learning techniques for prediction of early virological suppression in antiretroviral initiating HIV patients.
BMC Medical Informatics and Decision Making
Revue Medicale Suisse
CIN: Computers, Informatics, Nursing
Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 492–500. Bolukbasi, T.; Chang, K.-W.; Zou, J. Y.; Saligrama, V.; and Kalai, A. T. 2016. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In
Advances in Neural Information Processing Systems, 4349–4357. Bonawitz, K.; Ivanov, V.; Kreuter, B.; Marcedone, A.; McMahan, H. B.; Patel, S.; Ramage, D.; Segal, A.; and Seth, K. 2017. Practical secure aggregation for privacy-preserving machine learning. In
Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, 1175–1191. Bosia, M. J.; McEvoy, S. M.; and Rahman, M. 2020.
The Oxford Handbook of Global LGBT and Sexual Diversity Politics. Oxford University Press. Bowleg, L. 2020. We’re not all in this together: On COVID-19, intersectionality, and structural inequality.
American Journal of Public Health. arXiv preprint arXiv:2005.14165
Conference on Fairness, Accountability and Transparency, 77–91. Burn, I. 2018. Not all laws are created equal: Legal differences in state non-discrimination laws and the impact of LGBT employment protections.
Journal of Labor Research
Sexualities
Bodies That Matter: On the Discursive Limits of Sex. Taylor & Francis. Chakraborti, T.; Patra, A.; and Noble, J. A. 2020. Contrastive fairness in machine learning.
IEEE Letters of the Computer Society. arXiv preprint arXiv:2011.03731
Calif. L. Rev.
Journal of Gay & Lesbian Social Services
Conference on Fairness, Accountability and Transparency, 134–148. Cobbe, J. 2019. Algorithmic censorship on social platforms: Power, legitimacy, and resistance.
Legitimacy, and Resistance. Cook, S. 2017.
The Battle for China’s Spirit: Religious Revival, Repression, and Resistance under Xi Jinping. Rowman & Littlefield. Corbett-Davies, S.; and Goel, S. 2018. The measure and mismeasure of fairness: A critical review of fair machine learning. arXiv preprint arXiv:1808.00023
Nature Machine Intelligence. arXiv preprint arXiv:2010.07249. Crocker, J.; Major, B.; and Steele, C. 1998. Social stigma. In Gilbert, D. T.; Fiske, S. T.; and Lindzey, G., eds.,
The Handbook of Social Psychology
Handbook of LGBT Communities, Crime, and Justice, 339–362. Springer. Delisle, L.; Kalaitzis, A.; Majewski, K.; de Berker, A.; Marin, M.; and Cornebise, J. 2019. A large-scale crowdsourced analysis of abuse against women journalists and politicians on Twitter. arXiv preprint arXiv:1902.03093
Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, 214–226. Dynes, W. R. 2014.
The Homophobic Mind. Lulu.com. Ehrenfeld, J. M.; Gottlieb, K. G.; Beach, L. B.; Monahan, S. E.; and Fabbri, D. 2019. Development of a natural language processing algorithm to identify and evaluate transgender patients in electronic health record systems.
Ethnicity & Disease. arXiv preprint arXiv:2004.12332
Journal of Economic Behavior & Organization
Science. arXiv preprint arXiv:2005.06625. arXiv preprint arXiv:2008.13122. arXiv preprint arXiv:1806.11212
Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 1–13. Hanna, A.; Denton, E.; Smart, A.; and Smith-Loud, J. 2020. Towards a critical race methodology in algorithmic fairness. In
Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 501–512. Hans, A.; Schneegaß, D.; Schäfer, A. M.; and Udluft, S. 2008. Safe exploration for reinforcement learning. In
ESANN, 143–148. Hashimoto, T.; Srivastava, M.; Namkoong, H.; and Liang, P. 2018. Fairness without demographics in repeated loss minimization. In
International Conference on Machine Learning, 1929–1938. PMLR. Heidari, H.; and Krause, A. 2018. Preventing disparate treatment in sequential decision making. In
International Joint Conference on Artificial Intelligence, 2248–2254. Hendrycks, D.; Burns, C.; Basart, S.; Critch, A.; Li, J.; Song, D.; and Steinhardt, J. 2020a. Aligning AI with shared human values. arXiv preprint arXiv:2008.02275. arXiv preprint arXiv:2009.03300
American Psychologist
Tex. J. Women & L.
Stan. Tech. L. Rev.
22: 290. Hsieh, N.; and Ruther, M. 2016. Sexual minority health and health risk factors: Intersection effects of gender, race, and sexual identity.
American Journal of Preventive Medicine. arXiv preprint arXiv:1912.05511
International Conference on Machine Learning, 3000–3008. PMLR. Jane, E. A. 2020. Online abuse and harassment.
The International Encyclopedia of Gender, Media, and Communication, 1895–1912. Ji, D.; Smyth, P.; and Steyvers, M. 2020. Can I trust my fairness metric? Assessing fairness with unlabeled data and Bayesian inference.
Advances in Neural Information Processing Systems. arXiv preprint arXiv:1905.10660
Affilia
International Conference on Machine Learning, 2564–2572. PMLR. Keyes, O. 2018. The misgendering machines: Trans/HCI implications of automatic gender recognition.
Proceedings of the ACM on Human-Computer Interaction
Proceedings of the ACM India Joint International Conference on Data Science and Management of Data, 342–345. Kim, M.; Reingold, O.; and Rothblum, G. 2018. Fairness through computationally-bounded awareness.
Advances in Neural Information Processing Systems
31: 4842–4852. King, T.; Aitken, Z.; Milner, A.; Emerson, E.; Priest, N.; Karahalios, A.; Kavanagh, A.; and Blakely, T. 2018. To what extent is the association between disability and mental health in adolescents mediated by bullying? A causal mediation analysis.
International Journal of Epidemiology
State-Sponsored Homophobia. Krakovna, V.; Uesato, J.; Mikulik, V.; Rahtz, M.; Everitt, T.; Kumar, R.; Kenton, Z.; Leike, J.; and Legg, S. 2020. Specification gaming: The flip side of AI ingenuity. https://deepmind.com/blog/article/Specification-gamingthe-flip-side-of-AI-ingenuity. Kraus, M. W.; Torrez, B.; Park, J. W.; and Ghayebi, F. 2019. Evidence for the reproduction of social class in brief speech.
Proceedings of the National Academy of Sciences
Advances in Neural Information Processing Systems, 4066–4076. Lahoti, P.; Beutel, A.; Chen, J.; Lee, K.; Prost, F.; Thain, N.; Wang, X.; and Chi, E. H. 2020. Fairness without demographics through adversarially reweighted learning. arXiv preprint arXiv:2006.13114
North American Journal of Psychology
BMC Medical Research Methodology
International Conference on Human-Computer Interaction, 345–355. Springer. Liu, W. M.; and Ali, S. R. 2008. Social class and classism: Understanding the psychological impact of poverty and inequality.
Handbook of Counseling Psychology
4: 159–175. Liu, Y.; Zhang, W.; and Yu, N. 2017. Protecting privacy in shared photos via adversarial examples based stealth.
Security and Communication Networks
The Lancet HIV. arXiv preprint arXiv:2005.07572
American Journal of Public Health
The Lancet Digital Health
Health
Journal of Health and Social Behavior
Psychological Bulletin
The Journal of Clinical Endocrinology & Metabolism
Making Hispanics: How activists, bureaucrats, and media constructed a new American. University of Chicago Press. Nadal, K. L.; Issa, M.-A.; Leon, J.; Meterko, V.; Wideman, M.; and Wong, Y. 2011. Sexual orientation microaggressions: ‘Death by a thousand cuts’ for lesbian, gay, and bisexual youth.
Journal of LGBT Youth
Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security
American Eugenics: Race, Queer Anatomy, and the Science of Nationalism. U of Minnesota Press. Out in Tech. n.d. Out in Tech. https://outintech.com/. Accessed: 2020-10-07. Parent, M. C.; DeBlaere, C.; and Moradi, B. 2013. Approaches to research on intersectionality: Perspectives on gender, LGBT, and racial/ethnic identities.
Sex Roles. arXiv preprint arXiv:1808.07231
Using supervised machine learning and sentiment analysis techniques to predict homophobia in Portuguese tweets. Ph.D. thesis, Fundação Getulio Vargas. Poulsen, A.; Fosch-Villaronga, E.; and Søraa, R. A. 2020. Queering machines.
Nature Machine Intelligence
Nature Machine Intelligence
American Journal of Orthopsychiatry
Proceedings of the IEEE/CVF International Conference on Computer Vision, 1–11. Roth, J. A.; Radevski, G.; Marzolini, C.; Rauch, A.; Günthard, H. F.; Kouyos, R. D.; Fux, C. A.; Scherrer, A. U.; Calmy, A.; Cavassini, M.; et al. 2020. Cohort-derived machine learning models for individual prediction of chronic kidney disease in people living with Human Immunodeficiency Virus: A prospective multicenter cohort study.
The Journal of Infectious Diseases. Saha, P.; Mathew, B.; Goyal, P.; and Mukherjee, A. 2019. HateMonitors: Language agnostic abuse detection in social media. arXiv preprint arXiv:1909.12642
Passing: Identity and Interpretation in Sexuality, Race, and Religion, volume 29. NYU Press. Scheuerman, M. K.; Wade, K.; Lustig, C.; and Brubaker, J. R. 2020. How we’ve taught algorithms to see identity: Constructing race and gender in image databases for facial analysis.
Proceedings of the ACM on Human-Computer Interaction
Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media, 1–10. Schneider, J. S.; Silenzio, V. M.; and Erickson-Schroth, L. 2019.
The GLMA Handbook on LGBT Health. ABC-CLIO. Scicchitano, D. 2019. The ‘real’ Chechen man: Conceptions of religion, nature, and gender and the persecution of sexual minorities in postwar Chechnya.
Journal of Homosexuality
Proceedings of the Conference on Fairness, Accountability, and Transparency, 59–68. Semerdjian, J.; Lykopoulos, K.; Maas, A.; Harrell, M.; Priest, J.; Eitz-Ferrer, P.; Wyand, C.; and Zolopa, A. 2018. Supervised machine learning to predict HIV outcomes using electronic health record and insurance claims data. In
AIDS 2018 Conference. Sender, K. 2018. The gay market is dead, long live the gay market: From identity to algorithm in predicting consumer behavior.
Advertising & Society Quarterly. arXiv preprint arXiv:2005.00268
Disability Hate Speech: Social, Cultural and Political Contexts. Singh, S.; Song, R.; Johnson, A. S.; McCray, E.; and Hall, H. I. 2018. HIV incidence, prevalence, and undiagnosed infections in US men who have sex with men.
Annals of Internal Medicine. Srivastava, M.; Heidari, H.; and Krause, A. 2019. Mathematical notions vs. human perception of fairness: A descriptive approach to fairness for machine learning. In
Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2459–2468. Strengers, Y.; Qu, L.; Xu, Q.; and Knibbe, J. 2020. Adhering, steering, and queering: Treatment of gender in natural language generation. In
Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1–14. Suite, D. H.; La Bril, R.; Primm, A.; and Harrison-Ross, P. 2007. Beyond misdiagnosis, misunderstanding and mistrust: Relevance of the historical perspective in the medical and mental health treatment of people of color.
Journal of the National Medical Association
International Conference on Principles and Practice of Multi-Agent Systems, 312–327. Springer. Tanash, R. S.; Chen, Z.; Thakur, T.; Wallach, D. S.; and Subramanian, D. 2015. Known unknowns: An analysis of Twitter censorship in Turkey. In
Proceedings of the 14th ACM Workshop on Privacy in the Electronic Society, 11–20. Tavits, M.; and Pérez, E. O. 2019. Language influences mass opinion toward gender and LGBT equality.
Proceedings of the National Academy of Sciences
Journal of Counseling Psychology
Journal of Adolescence
Journal of Health and Social Behavior
Lancet
Big Data & Society
Computational Intelligence and Neuroscience
LGBT Psychology and Mental Health: Emerging Research and Advances
The Lancet Public Health
Journal of Personality and Social Psychology
English Journal
Disability and the Life Course: Global Perspectives. arXiv preprint arXiv:2005.12246
Journal of Medical Internet Research
Global Information Society Watch 2015: Sexual rights and the internet
Journal of Acquired Immune Deficiency Syndromes