Beyond Our Behavior: The GDPR and Humanistic Personalization
Travis Greene & Galit Shmueli
[email protected], [email protected]
Institute of Service Science, National Tsing Hua University, Hsinchu, Taiwan
ABSTRACT
Personalization should take the human person seriously. This requires a deeper understanding of how recommender systems can shape both our self-understanding and identity. We unpack key European humanistic and philosophical ideas underlying the General Data Protection Regulation (GDPR) and propose a new paradigm of humanistic personalization. Humanistic personalization responds to the IEEE's call for Ethically Aligned Design (EAD) and is based on fundamental human capacities and values. Humanistic personalization focuses on narrative accuracy: the subjective fit between a person's self-narrative and both the input (personal data) and output of a recommender system. In doing so, we re-frame the distinction between implicit and explicit data collection as one of nonconscious ("organismic") behavior and conscious ("reflective") action. This distinction raises important ethical and interpretive issues related to agency, self-understanding, and political participation. Finally, we discuss how an emphasis on narrative accuracy can reduce opportunities for epistemic injustice done to data subjects.
INTRODUCTION

Machine learning-backed personalized services, among them recommender systems (RS), have become a permanent fixture in our increasingly digital lives. Personalization relies on vast quantities of behavioral big data (BBD): the personal data generated when humans interact with apps, devices, and social networks [94]. Besides supplying bases for recommendations and "hypernudges" [112] towards these recommendations, BBD are the essential raw material of our digital representations. But to what extent do these logged digital behaviors truly reflect who we are as persons?

As life steadily moves online, the digital representations of persons take on legal and moral importance [106]. Influential European legal theorists and philosophers have even written an Onlife Manifesto, shaping the discourse around what it means to be human in the digital age [40]. At the same time, the IEEE has articulated a vision of Ethically Aligned Design (EAD) that empowers "individuals to curate their identities and manage the ethical implications of their data" [4].

The current domain of RS is largely focused on problems of "practical identity," which include various methods for constructing, identifying, and linking various data representations of users [27, 70]. In contrast, philosophers and social scientists are concerned with the moral, social, and, increasingly, narrative identities of persons. This separation of spheres of activity is no longer acceptable given the potential effects RS can have on the person. We thus propose humanistic personalization (HP) as a novel guide for thinking about and designing personalized RS. The HP paradigm of mutual, dialogic participation obliges RS designers to promote and respect the unique capacities of persons to create and modify personal narratives over time.

Our paper builds on but differs from recent work on RS value-alignment [99], self-actualization [65], qualitative evaluation metrics [76], and the epistemic and ethical problems of over-reliance on behavioral data [35]. Most notably, we use the General Data Protection Regulation (GDPR) as the basis for ethical and legal norms instead of adding to the current morass of competing principles, guidelines, and frameworks for Ethical AI/ML (see, e.g., [79]). Tying ethical principles to legal norms via the GDPR is valuable because, in principle, the GDPR applies to any data controller processing the personal data of data subjects residing in the EU [52]. Further, law is institutionally backed to achieve greater behavioral compliance via the state's monopoly on the legitimate use of physical force [18, 56]. Thus our conception of HP is likely to foster greater compliance by industry and researchers around the globe, even if some minor details of the GDPR have not yet been fully worked out. In sum, we believe the GDPR provides a solid grounding for deep humanistic design principles relevant to RS, a view shared by a number of European legal scholars [59, 60].

Second, we re-frame [35]'s distinction between implicit and explicit feedback as one between conscious action and nonconscious behavior and perception. This has philosophical and moral implications for RS users.

Third, we cast doubt on the meaning of the structural regularity of observed behavioral data, which are encoded and parameterized by the machine learning (ML) algorithms used to generate recommendations. We suggest these regularities may reflect the structure of platform affordances more than a person's underlying preferences. In this way, debates in the philosophy of sociology (e.g., the structure-agency debate) may provide insights into areas where personalization is currently weak. A final and absolute determination of a person's "true" preferences or interests may not be possible, a conclusion which dovetails with postmodern theory and complexity theory [58]. The upshot of this diagnosis is that giving data subjects a constructive voice in the meaning and interpretation of digitally recorded events may be our best option. We introduce the legal basis supporting such a conclusion.
Conventional collaborative filtering (CF), the dominant paradigm in personalized recommendation, relies primarily on implicit data collection to drive recommendations [23]. CF leverages implicit data to determine a focal user's "nearest neighbors" [16] and recommends items bought or consumed by the "neighbors," but not yet by the focal user. In many large-scale production systems, implicit data take the form of browsing history, rating metadata, search queries, and social graphs. Implicit data collection relieves users of the task of explicitly rating items and can be easily automated and scaled [80]. Implicit data are a type of BBD.

During personalization, a user is represented roughly as follows. Database representations of a person, i.e., tabularized collections of logged behaviors in apps or on devices, are converted into feature vectors, permitting the computation of various proximity metrics between pairs of vectors. Next, "neighborhoods" are derived from the set of neighbors in feature space deemed closest to a given user, according to a distance metric. A 10-dimensional feature vector represents a person as an array of 10 numbers, obtained from measurements of observed behavior, thereby replacing the person with a single point in 10-dimensional feature space. To reduce computational costs, the dimensions of the original user-item matrix are often reduced to a lower-dimensional "latent space" by techniques such as singular value decomposition (SVD). Recently, deep learning approaches to RS have used stacked denoising autoencoders to generate latent representations of users [109]. Digital representations of persons thus take the form of feature sets of narrow, observed behaviors within an app or device and exclude their unique mental states, social and moral identities, and personal narratives. (A minimal code sketch at the end of this section illustrates this reduction.)

The behavioral focus of CF personalization exposes it to the same criticisms leveled at behaviorism. Behaviorism rejects the study of causal cognitive mechanisms of human behavior (beliefs, intentions, goals, values) in favor of measurable features of the behavioral environment [98]. The behaviorist model of the human person (abstracted to an "organism") is inherently at odds with that of the GDPR. BBD-based models are what a humanist might describe as a "caricature": a model not only overly simplistic and approximate, but one which actively distorts reality [49].

In the case of CF, the behaviors measured are self-interestedly chosen by BBD platforms (e.g., Facebook or Google) and often driven by business, not scientific or ethical, goals such as click-through rate, engagement, or conversion. Further, the ability to manipulate digital environments at the individual level via hypernudges obscures reliable interpretation of observed behaviors. This problem is particularly relevant for RS using online learning, such as deep reinforcement learning [114].

Philosophers of science have long understood that measurement itself is a form of representation. As Bas van Fraassen notes, "a measurement outcome does not display what the measured entity is like, but what it 'looks like' in the measurement set-up" [107]. The act of measuring something "locates it in an ordered space of possible measurement outcomes," where this ordered space is built into the measuring instrument [6]. Likewise, observations produced by instrumentation (e.g., phones or apps) are inherently perspectival: they are sensitive only to a particular type of input and are never perfectly transparent [50].
Behavioral data are more precisely capta, or the "units that have been selected and harvested from the sum of all potential data" [64]. Feature sets of narrowly defined and observed (logged) behaviors within an app or device constitute our digital representations. Yet the meaning of observed behaviors is under-determined, since they could spring from a variety of possible mental states. By leaving out mental states, social and moral identity, and narrative aspects of human experience, BBD-based models of persons are representations merely by "stipulative fiat" [17] of RS designers, not because they capture essential aspects of a person.
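To make the preceding description concrete, the following is a minimal, illustrative sketch of neighborhood-based CF. The interaction matrix, similarity measure, and dimensions are hypothetical toys of our own construction; production systems operate at vastly larger scale with more elaborate models, but the reduction of a person to a point in item-space is the same in kind.

    import numpy as np

    # Hypothetical implicit-feedback matrix: rows are users, columns are items;
    # entries are logged behaviors (e.g., click counts), not stated preferences.
    interactions = np.array([
        [3, 0, 1, 0, 2],   # focal user
        [2, 0, 1, 2, 3],
        [0, 4, 0, 1, 0],
        [1, 0, 2, 0, 1],
    ], dtype=float)

    def cosine(u, v):
        """Proximity between two behavioral feature vectors."""
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

    # The person is now a single point in item-space; "neighbors" are the
    # nearest such points, regardless of the mental states behind the clicks.
    focal = interactions[0]
    sims = [cosine(focal, other) for other in interactions[1:]]
    nearest = int(np.argmax(sims)) + 1

    # Recommend items the nearest neighbor engaged with but the focal user has not.
    recommended = np.where((interactions[nearest] > 0) & (focal == 0))[0]
    print(f"nearest neighbor: user {nearest}; recommend item(s): {recommended.tolist()}")

Note that nothing in this computation refers to the person's intentions or self-narrative: two users are "similar" solely because their logged traces point in similar directions.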
The act of inferring personal interests from digital behavior, the ostensible goal of personalization, leads to deep philosophical questions. For instance: (a) What kind of objects are our data representations? and (b) What is the relation between parts (measurements) and wholes (persons), especially when a global objective function is optimized?

Two influential accounts of how persons and personal data relate exist in the philosophy of information and marketing literatures. Yet we find both views to be legally and morally problematic. Our purpose in introducing these two points of view is not to attempt to hash out their metaphysical details, but instead to introduce readers to a new way of thinking about the complexities of technologically-mediated being. We describe each view next.
You (Really) are Your Data.
The Oxford philosopher of information Luciano Floridi [39] endorses what we term a "realist" view of personal data. He maintains, "You are your information," so that anything done to your information is done to you, not to your belongings. To Floridi, personal data do not represent you, but rather constitute you as you: your personal identity and your personal data are functionally equivalent, though at different levels of abstraction. Floridi contends that the GDPR's rights to informational privacy protect the constitution of one's identity. If data processors alter or delete personal data about you, they also alter or delete aspects of your personal identity.

Floridi's view confers strong privacy safeguards for data subjects: Facebook's Emotional Contagion experiment [66] would be tantamount to psychological abuse. But Floridi's view falls short for other reasons. Persons have rights; personal data do not.
Data can be nearly infinitely cloned and copied; persons cannot. Further, if Floridi is right, identity theft would be akin to literal kidnapping [96]. Moreover, Floridi's view overstates the importance of how others classify us. Our social identities indeed depend on how others label us, but we are capable of choosing to reject such labels. Lastly, Floridi does not provide any account of why certain kinds of personal data are more sensitive than others.
Dividuals: Persons or Posthumans?
Postmodern marketing researchers and cultural theorists take a different approach. For them, databases, and now RS, construct dividuals: digital entities derived from masses of individuals' personal data [21]. Dividuals are, in the words of poststructuralist philosopher Gilles Deleuze, "cybernetic subject[s] made up of data points, codes, and passwords" [29]. Matrix decomposition techniques, such as SVD, combine and re-assemble personal data to birth new marketable entities separate from the persons from whom they were derived. The resulting posthuman assemblages are variously termed data doubles, capta shadows, and digital personas [64].

This interpretation of digital identity is also problematic. Under the GDPR, only natural living persons have rights; dividuals are outside the scope of the GDPR. Facebook's above-mentioned study would be legally and ethically kosher. Second, personalization does not divide up individual persons, but rather their digital representations. Unlike human individuals, nothing about these digital representations demands they remain unified wholes. Finally, recommendations must ultimately be linked to actual living persons through unique identifiers, but it is unclear how dividuals relate to the persons from whom they were derived. While the dividual is an interesting metaphor, it has limited applicability in the real world.

The remainder of this paper is structured as follows. First, we consider and critique the various epistemological and ethical blind spots of many current BBD-based RS built on ML algorithms (e.g., CF and reinforcement learning). Next, we lay out the legal and philosophical groundwork underlying the GDPR. We argue the conceptual seeds of humanistic personalization are rooted in the legal principles of a right to personality and in informational self-determination. We then connect these principles to the ideas of key Enlightenment and Romanticist thinkers, and derive some insights for RS design. Following that, we respond to the IEEE's call for EAD by introducing the notion of narrative accuracy and its complement, epistemic injustice [44]. The goal of narrative accuracy is to align the predictions and explanations of RS with one's personal narrative. Finally, we consider potential dangers in emphasizing human subjectivity through narrative and discuss possible compromises. Ultimately, we hope this work contributes to the conceptual foundations of participatory design of recommender systems [7, 35] and third-paradigm HCI [55].

We currently reside in what technologist Eli Pariser calls the "uncanny valley of personalization" [82, pg. 65]. Users know recommendations derive from our digital representations, but are often not quite sure how or why. Escaping this "uncanny valley" requires new, reflective perspectives on the design and impact of AI/ML. Our notion of HP envisions a shift from an implicit, behavior-based representation paradigm, dominated by our "organismic" interests, to one centered on conscious, explicit feedback [35] through the notion of dialogic narrative construction.

Driving our discussion is a distinction we make between conscious action and nonconscious "stimulus-response" behavior. The difference is essentially between "what an agent does" and "what merely happens to him" [42]. Philosophers have traditionally associated intentional action with conscious, rational beings and mere behavior with nonconscious states.
Importantly for our arguments here, narrative identity presupposes conscious reflection on and recounting of events. We realize there is a lot of grey area, but we believe our contentions are supported by the emerging literature in cognitive psychology.

Our main point in this section is simple. We assert that it is not the case that current RS personalization is incredibly accurate; rather, we, as linguistic, social, and physically embodied animals, have deceived ourselves as to our potential for free movement and thought. In reality, much of what we do is conditioned by the perceived features of our social, physical, and now digital environments. This idea may seem obvious now, but it took philosophers centuries to overturn the Cartesian dualism between mind and body and realize the boundaries between subject and object were not clear and distinct. Our bodies, for instance, shape our perceptions of the world and our choice of linguistic metaphors [67].

On top of this, the increasing ubiquity of digital environments further limits and constrains what is humanly possible. Perhaps even worse, as degrees of freedom in digital environments are reduced, hermeneutic problems of human action arise and present ethical problems. (Hermeneutics originally dealt with the interpretation of biblical texts, but was revitalized as a philosophical method of literary analysis in the 20th century.) For one, the behaviorist assumptions behind the collection of implicit data fail to appreciate the important caveat that "variable acts produce a constant result" [87]. When complex behaviors are broken down into overly narrow "sub-symbolic" categories by BBD platforms and RS designers (we note that the English word "category" traces back to the Greek kategoreisthai, which means "publicly accuse" [13, pg. 18]), intentions become decoupled from results. What is more, a clear one-to-one mapping of intentions to actions becomes impossible. One cannot intend to do what one cannot first identify. Misinterpretation appears baked into digital life and is worsened when automated systems can dynamically change digital environments in real time.
More recently, neuroscientists and psychologists have studied how our brains and perceptual systems evolved to make quick and relatively accurate assessments of our social and natural environments, resulting in the nonconscious guidance of mental processes involved in interpersonal behavior and goal pursuit [8, 57, pg. 44]. So although the behaviorist paradigm in psychology is no longer dominant, it was only partly misguided. Much of what we do is indeed determined by perceived environmental structures (e.g., affordances) and evolutionary drives, many of which operate nonconsciously.

According to [46], ideomotor actions are "movements or behaviours that are unconsciously initiated, usually without an accompanying sense of conscious control." They are actions (in our sense, behaviors), typically triggered by environmental cues, that express information not consciously accessed. These researchers speculate that "ideomotor actions, especially visually-guided ones, may reflect the operation of an 'inner zombie'–a concurrent nonconscious system expressed primarily via motor action" [46]. [36] explains further, "After people have been implicitly primed with cues closely linked in their memory with striving toward an end-point, they display behavior that meets classical criteria of motivation, such as persistence and resumption after an interruption." In other words, it can be impossible to tell from observation alone whether movements are conscious actions or nonconscious behavior.

Yet new technology allows for the capture of increasingly fine-grained movements. As one example of how nonconscious behavior can potentially be leveraged by an RS, [19] describes a patent for software correlating nonconscious elements of behavior with users' demographic characteristics. By analyzing the distinctive patterns of mouse trajectories, for instance, inferences about nonconscious activity can be drawn. The patent demonstrates how this could be used to personalize the presentation of website content:

    As individuals continue to become more accustomed to using digital devices, and activities hosted on digital devices, the concept of tracking an individual's nonconscious behaviors using the digital devices becomes increasingly convenient... [a] web server can prepare in real time what the next presentation and interface should be in order to capture more of the viewer's attention by presenting the web content in modalities which the viewer has nonconsciously selected. Thus, content is directed via a Smart web server to a viewer based on the viewer's nonconscious selection.

On the flip side, if nonconscious behaviors are regular enough to be detected by ML techniques, then this suggests we might use ML to actively discount or re-weight behaviors in favor of conscious, intentional action (we sketch one such re-weighting at the end of this section).

Moving now to the field of information retrieval, a major assumption is that the user is driven by an "information need." Even when updated using Broder's "trichotomy" of web search types [14] to include informational, navigational, and transactional searches, we still do not do justice to the complexities of human meaning and context.
Such a model of human behavior abstracts the person to the level of organismic interests. It is a caricature.

From a humanistic point of view, such research typically fails to distinguish between conscious and nonconscious "interests" and "goals." As one example, [53] analyze mouse trajectories and scroll patterns for the "intent to buy." Researchers also typically overlook key ethical implications of pre-defined "intent" categories, for example, that intentions are assumed to fall under either "research" or "buy." For a philosopher, there is an important difference between behavior that is more or less evolutionarily hard-wired, essentially stimulus-response, and behavior that is the result of deliberate, reasoned reflection on one's personal values. Law also recognizes a moral difference between the two.

Humans are more than mere information-seeking organisms. The web queries "how can I get a six pack in 30 days" and "how can I help end human suffering" are both informational search types driven by an assumed information need. Yet only the most die-hard positivist would argue these observed queries (treated as bags of words) are qualitatively equivalent. Personalization should take the person seriously. If we recognize the human capacity for moral reflection, we have to treat the second search as inherently more meaningful and worthy of pursuit than the first. The question is, how might we design RS to promote this kind of reflective endorsement of our desires?
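One speculative way to operationalize the re-weighting suggested above is sketched below. The signal types and weights are assumptions of our own, not values drawn from the cited literature: the idea is simply that a user profile could privilege consciously given, "reflective" signals over passively logged, "organismic" ones.

    from collections import defaultdict

    # Assumed reflection weights: passively logged behavior counts for little,
    # deliberate ratings count for more, and an explicit, stated endorsement
    # of an interest counts most. The values are illustrative, not calibrated.
    SIGNAL_WEIGHTS = {"impression": 0.0, "click": 0.2, "rating": 0.6, "stated_interest": 1.0}

    def reflective_profile(events):
        """Aggregate logged (topic, signal) events into interest scores,
        privileging conscious, 'reflective' signals over 'organismic' ones."""
        scores = defaultdict(float)
        for topic, signal in events:
            scores[topic] += SIGNAL_WEIGHTS[signal]
        return dict(scores)

    events = [
        ("fitness", "click"), ("fitness", "click"), ("fitness", "impression"),
        ("ethics", "stated_interest"), ("ethics", "rating"),
    ]
    # Despite fewer logged behaviors, "ethics" outweighs "fitness" because
    # its signals involve conscious endorsement.
    print(reflective_profile(events))  # {'fitness': 0.4, 'ethics': 1.6}

How such weights should be set, and by whom, is precisely the kind of question humanistic personalization would put to the data subject rather than the designer alone.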
Although controversial, behavior modification methods have been used by major BBD platforms to control digital environments and shape behavior towards "guaranteed outcomes" [115, pg. 201]. Derived from principles of behaviorist psychology, they are premised on the idea that "once you understand the environmental events that cause behaviors to occur, you can change the events in the environment to alter behavior" [77, pg. 3]. A subset of these techniques are the so-called "dark patterns" of design. Dark patterns are cases where designers use their knowledge of human behavior to implement deceptive functionality that is not in the user's best interest [51]. We note that no distinction is made between "organismic" and "reflective" interests here.

Many of these dark patterns stem from the ideas of influential design gurus, such as BJ Fogg and Nir Eyal, who are candid about the ways in which products such as RS (RS can be seen as metaproducts: products which recommend other products) are "persuasive technologies," based on "choice architectures" aimed at getting us "hooked." These persuasive techniques include reduction, tailoring, tunneling, suggestion, self-monitoring, surveillance, and conditioning [51]. Some dark patterns are especially relevant to RS design, such as interface interference and forced action [51]. Typically, the ethics of dark patterns are seen from the designer's point of view. There the question is, "Is it right to design this way?" We suggest there is another, equally interesting way to view persuasive technology and dark patterns: from the user's point of view. There the question is, "If I didn't consciously choose this, does this 'choice' reflect anything deep about who I am?"

Computer scientist Paul Dourish details how the positivist, Cartesian paradigm of clear and distinct separations of subject and object, knower and knowledge, dominates computer system design [34]. In the philosophically informed version of HCI he advocates, "our experience of the world is intimately tied to the ways in which we act in it" [34, pg. 18]. On this view, questions about the "true" boundaries of subject/object and system/environment are inherently irresolvable. Dourish also highlights a key aspect of what [35] call the intention/behavior gap. [34, pg. 137] explains:

    When I click on the Buy now? button on a Web page, what matters to me is not that a database record has been updated... but rather that, in a day or two, someone will turn up at my door carrying a copy of the book for me. I act through the computer system. In turn, this takes us to another element of the relationship between embodied interaction and intentionality.

RS research that accounts for this problem of one event having two potential meanings is hard to find. In their work on context-aware recommender systems (CARS), [1] paradoxically claim they rely on a "representational" approach and cite Dourish's work. Yet Dourish's paper is aimed at explaining the ways in which the "positivist/representational" thinking behind ubiquitous computing is shortsighted and incomplete. He instead suggests drawing on "phenomenological" and "embodied" approaches to understanding context and its role in meaning creation. Dourish, however, does not consider legal norms in his discussion of how designers can be "nudged" to include these more nuanced, post-positivist conceptions of context in RS.
We claim the GDPR provides grounds for doing so, irrespective of the underlying economic or business logic.
The philosopher Paul Ricoeur connects language and behavior in a unique way relevant to BBD and RS, and to social science more generally. He holds that we can view behavior as text and thus apply hermeneutic methods to its interpretation, much as we might to a book or speech [88]. He calls this aspect of text distanciation. Distanciation breaks the link between writing and its original context (and its intended audience), granting the text semantic autonomy while also creating new problems of reference. Ricoeur alleges that to understand a text, the reader must "unfold" the possibility of being indicated by the text, an act he refers to as appropriation [104, pg. 55]. In other words, interpretation involves self-conscious reflection on both the proposed meaning of the text and how such a meaning might contribute to one's own self-understanding in relation to others (its author). Ricoeur claims the creativity of literary narrative and self-identity are driven by this process of "imaginative variations" on an underlying invariant structure of text or character [89, pg. 148].

Once we understand digital behavior as text, we can go one step further and apply the insights of Jacques Derrida towards the goal of BBD interpretation. Derrida asserts that speech requires presence, which makes the context of the utterance clear [30]. Traditionally, you had to be near someone to hear what came from their mouth. This nearness provided the context for the utterance: in case of any confusion, its referent could be pointed to or gestured at by the speaker. Meaning could be dialogically negotiated relatively easily. But the power of writing is that it detaches the written sign (the word/signifier/symbol) from the writer in its original, intended context (the signified). Derrida claimed that writing is the death of the author because it still functions as a sign in his absence [31, pg. xli]. We can interpret the written mark in any conceivable way once we have broken it from its original context of production. Derrida's point is that this move from presence (speech) to absence (writing) is both problematic and creative. Re-presentation is an attempt to recreate the presence of the author of the text in some new, different context.

The point in introducing Derrida is to show that once detached from the presence of data subjects, personal data become "public signs" which can be algorithmically manipulated and combined in novel ways by RS designers. The downside is that we can now never know the original "true" intention in the absence of the original act of data production. We must be satisfied to re-present, and not present, the author of the data. We thus face a crisis of interpretation. What to do? If we follow the GDPR, we let the data subjects themselves decide.
Sociologist Anthony Giddens introduces the idea of structuration to analyze how societies evolve in terms of systems, structures, and rules. Systems are the relations between actors, organized as regular social practices, while structure is the unseen rules and resources deriving from the system; finally, rules are procedures for social interaction which constitute meaning and sanction various conduct [71, pg. 59]. Structuration emphasizes the recursive duality of social structures (they have both synchronic and diachronic aspects), implying that by performing behaviors and following rules, agents simultaneously change and cement existing social structures.

In Giddens' view, it is mistaken to believe there were agents before societies, as was assumed by Locke or Hobbes. Instead, Giddens argues all persons are born into existing societies, and social structures pre-exist for them as the ongoing activities of members. Agents become persons in a society, and by their actions within it, reproduce and transform it. This view suggests two opposing conclusions: the actions of agents in society are to some extent determined by existing structures and rules in society, yet by acting, each actor transforms society and gives it a unique history. The philosophical question is, how much of what we do in society is determined by its structure and how much derives from our capacity as free agents?

A similar question arises in our interactions with digital environments. As an analogy, we can think of reinforcement learning (RL) used in RS. The goal of such an RL system is to find an optimal recommendation policy as it continually explores and exploits its state space [45]. This adaptive process is what drives RL personalization and its various applications in search, ranking, and webpage/app optimization. In the contextual bandit approach, for instance, exploitation translates to recommending items predicted to best match users' preferences, while exploration translates to giving random recommendations in the hope of receiving more "feedback" [114].

As one prominent example, Facebook uses its open source Horizon platform to send "personalized" push notifications and updates to millions of users. [47] explains how Facebook uses a Markov Decision Process (MDP) to serve personalized recommendations:

    The Markov Decision Process is based on a sequence of notification candidates for a particular person. The actions here are sending and dropping the notification, and the state describes a set of features about the person and the notification candidate. There are rewards for interactions and activity on Facebook, with a penalty for sending the notification to control the volume of notifications sent. The policy optimizes for the long term value and is able to capture incremental effects of sending the notification by comparing the Q-values of the send and don't send action.

(A minimal sketch of this decision rule appears below.) Yet this approach poses many ethical questions, besides the obvious ones related to using a set of predefined user features. Another issue is the extent to which nonconscious goal-directed behavior may be driving many of the observed trajectories under a specific policy.
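To make the quoted description concrete, here is a minimal sketch of the send/drop decision rule. The state features, penalty value, and linear Q-function are stand-ins of our own; Horizon's actual models and parameters are not described in the quoted passage.

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in Q-function: in a real system this would be a trained network
    # mapping (state, action) to expected long-term value. Here it is a
    # random linear model over hypothetical, designer-defined state features.
    W_SEND, W_DROP = rng.normal(size=4), rng.normal(size=4)
    SEND_PENALTY = 0.1  # fixed cost per notification, to control volume

    def q_value(state, action):
        """Expected long-term value of sending vs. dropping a notification."""
        if action == "send":
            return state @ W_SEND - SEND_PENALTY
        return state @ W_DROP

    def policy(state):
        """Send only if sending has the higher Q-value, i.e., compare the
        Q-values of the two actions as the quoted passage describes."""
        return "send" if q_value(state, "send") > q_value(state, "drop") else "drop"

    # Hypothetical state: features describing the person and the candidate
    # notification (e.g., recent activity, predicted relevance).
    state = np.array([0.7, 0.1, 0.3, 0.9])
    print(policy(state))

Note that every ingredient here, features, reward, penalty, and allowable actions, is fixed by the designer before any user ever interacts with the system.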
But there are deeper problems of interpretation due to the positivist formalization of RL, in which it is assumed that truth is absolute, a collection of brute facts seen from a view from nowhere (see, e.g., [33]).

First, the interpretations of states (e.g., the sets of categories describing the person and the notification) in the agent's state space are pre-defined by the system designers. Second, the interpretations of actions (e.g., sending and dropping notifications) in the action space are defined by system designers. Third, rewards are chosen by system designers and typically aim to maximize quantities like click-through rate, dwell time, or business revenue, which may have little to no value from the data subject's perspective. Fourth, any constraints on the set of allowable policies within the policy space are set by system designers. Taken together, these problems cast doubt on the aspirations of personalization by RL systems, if we are to take the person and his epistemic capacities seriously. Few if any of these design parameters capture what the user would reflectively endorse as the interpretation of an action or state, or as the selection of the reward to be maximized. These tensions suggest an area ripe for research on how a person's moral values can influence choices such as the reward and the criteria for allowable policies. We also see promise in connecting "teacher advice" for safe exploration [45] in RL with persons' moral values and personal identities.

This section lays out the basis for our reading of the GDPR, which we believe supports a new focus on narrative accuracy as an orienting ideal for RS. By the end, we hope to justify the legal scholar Mireille Hildebrandt's definition of privacy (in the European context) as "the freedom from unreasonable constraints on the construction of one's own identity" [60, pg. 80].

Spurred by revelations of US government surveillance programs, the ubiquity of AI/ML in industry, and the "datafication" of society, European policymakers aimed to re-conceive the role of digital technology in society [52]. Besides creating a "Digital Single Market" across Europe, lawmakers also hoped to strengthen the rights of individuals to protect and control their personal data. The distinctive nature of the European experience and philosophical tradition is reflected in the GDPR and distinguishes it from other recent regulations in California (CCPA) and China (Cybersecurity Law, CSL).

The result was the European Union's 2018 General Data Protection Regulation (GDPR), which updated the 1995 Directive and built on the legal foundations of the 2000 Charter of Fundamental Rights of the European Union (CFR). The CFR recognizes fundamental rights to privacy and protection of personal data for all persons in the "human community." We note that one reason for updating the 1995 Directive to the GDPR was to maintain consistency with the CFR's rights to data protection and privacy [113]. Accordingly, much of the content of the GDPR was included not because of anything related to technological advance or business concerns, but because it was needed for legal coherence with broader European ideas about human rights and privacy.

The GDPR addresses the storage, collection, and processing of personal data. The GDPR also regulates the automated processing and algorithmic profiling of personal data. These terms were left intentionally vague so as to encompass future developments in technology [91].
In the language of the GDPR, data subjects are "natural living persons"; personal data are "any information relating to an identifiable natural person"; and algorithmic profiling is any kind of "automated processing of personal data used to predict a natural person's interests or preferences" (among many other predictive targets) with "significant" (legal or otherwise) effects on the natural living person. Personalization, and therefore RS, falls under algorithmic profiling.

Unlike the US, where personal data processing occurs largely via an "opt-out" mechanism, the European approach is based on data subjects "opting in" to processing [111]. In other words, EU data subjects must decide to give consent to personal data processing when other legal bases of processing are not present. (Article 6 lays out the six lawful bases of personal data processing.) When no legal bases are present and the data subject has not consented to the processing of his personal data, processing cannot take place.

Under the GDPR, data subjects inherit many of the same rights they had under the 1995 Directive, plus a few notable additions. Articles 12-23 spell out these rights. Data subjects now enjoy the right to be forgotten (data subjects can request deletion of their data) and the right to data portability (data subjects can request a portable, electronic copy of their data).

Generally speaking, the rights of data subjects under the GDPR can be categorized as relating to transparency (i.e., clear and unambiguous consent; communication with data subjects should be clear and easily intelligible), information and access (i.e., who collected the data and for what purpose(s)), rectification and erasure (i.e., allowing data subjects to correct false information and delete old information), and objection to (automated) processing (i.e., removing consent to processing of any personal data, including algorithmic decision-making).

The GDPR places emphasis on the broad social effects of personal data processing. Recital 4 states that the purpose of personal data processing is to "serve mankind." Consequently, an individual's rights to object to processing are not absolute; they must be weighed against the broader societal benefits of such processing. This process of weighing is called the principle of proportionality. An example is the new right to be forgotten, which in some cases may conflict with the right to information access and with other retrieval and archiving goals, especially when dealing with publicly accountable figures, such as politicians [2]. In this sense, the GDPR can be seen as taking a somewhat utilitarian approach to personal data processing.
Our goal is to articulate the underlying values of the GDPR and use them to formulate foundational principles for RS design. These principles are valuable because they have withstood intense philosophical scrutiny over centuries. The rights given to data subjects under the GDPR reflect a certain European understanding of the human person and have evolved over time. Ultimately, from the European perspective, data protection and privacy are tools aimed at preserving human dignity [41, 69, pg. 89].
Accordingly, two crucial legal notions must be introduced: informational self-determination and its predecessor, the right to the free development of one's personality. Informational self-determination is defined as "an individual's control over the data and information produced about him," and is a necessary precondition for any kind of human self-determination [92]. In turn, self-determination is a precondition for "a free democratic society based on its citizens' capacity to act and to cooperate" [92].

It is worth quoting the 1983 German Federal Constitutional Court's decision to get a feel for its perspective [92]:

    The standard to be applied is the general right to the free development of one's personality. The value and dignity of the person based on free self-determination as a member of the society is the focal point of the order established by the Basic Law (Grundgesetz). The general personality right as laid down in Art. 2 (1) and Art. 1 (2) GG serves to protect these values, apart from other more specific guarantees of freedom, and gains in importance if one bears in mind modern developments with attendant dangers to the human personality.

In sum, the right to informational self-determination derives from a more basic and older right to the free development of one's personality, found in the German Basic Law. Self-determination is necessary to uphold the dignity of the human person in society and is inherently tied to one's capacity for political participation. This suggests that participatory design of recommender systems could indirectly play a role in fostering political participation in the public sphere.

We note that the European concept of freedom is not an "anarchic" freedom that conceives of the person as distinct and separable from society. Rather, it is broadly in line with more modern social constructionist thinking that views language as a shared social activity. Language and knowledge are not something one possesses in one's head, but "something people do": a "sociorational process" deriving from a communal "negotiated intelligibility" [48]. We will return to these ideas when looking at Hegel's thought.
To really understand the GDPR and its view of the person, we must view it within the broader context of the project of EU integration. The EU must somehow unite many individual, self-determining member states, each with its own history, culture, and language(s), under a common EU flag. The European project is about how to maintain national identity in the face of political integration and homogenization [54]. In other words, how can the EU construct a supranational identity while respecting the autonomy and self-determined identity of member states? (In the case of the GDPR, individual member states have derogations, or exceptions, allowing them to spell out the details of their particular national implementations of the EU-wide Regulation.)

The philosopher and social theorist Jürgen Habermas provides an influential model for understanding the European project and the role of data protection in political participation. Building on the Kantian/Enlightenment devotion to reason, Habermas views the democratic process as fundamentally based on communication and interaction with others in a public sphere of argument and debate [54]. Habermas rejects the metaphysical idea of an abstract, ahistorical, and universal "truth," replacing it instead with broad consensus and intersubjective agreement arising from the application of the process of communicative reason. Truth claims are valid to the extent to which they were generated by a process of deliberative argument with others, in good faith, and according to fundamental principles (pragmatics) of discourse. Instead of asking what a moral agent could will, while avoiding self-contradiction, as a universal maxim for all, Habermas asks what "norms or institutions would a communication community agree to as representing their common interests after engaging in a special kind of communication or conversation?" [10, pg. 24].

For Habermas, the role of the media is crucial in developing informed persons who can function in the public sphere. Yet the mass media wields disproportionate power to influence the opinions and ideas of private persons, limiting their ability to engage in deliberative politics [54]. Perhaps most importantly, mass media can lead to a passive citizenry with little experience in face-to-face political dialogue and its norms of reciprocity and argument. Further, mass media have the power to influence public opinion through the framing and selection of certain views and agendas at the expense of others.

Consequently, we can understand data protection regulation as one antidote to the power asymmetries wielded by BBD platforms. Persons should participate in the project of deliberating upon and developing their own unique identities based on the personal data they generate in apps and on devices. As the legal scholar [69, pg. 12] notes, data processing "exacerbates the information and power asymmetries between individuals and those responsible for personal data processing." In short, we see the GDPR as providing the data subject with the legal rights necessary to be an informed participant in one's self-determination and identity formation. But how should we understand the notion of identity as it relates to BBD?
We claim the GDPR echoes themes from both the European Enlightenment and the counter-Enlightenment. On the one hand, it stresses the conscious, reflective, and rational aspects of identity construction. On the other, it emphasizes individual expression and rejects the goals of scientific absolutism, reductionism, and determinism [11]. As such, it aims to provide the minimally necessary conditions for the exercise of unique human capacities. These capacities, identified and examined by key European philosophers, lay out the basis for these rights.

We turn to philosophy because the grounds of human rights ultimately rest on moral considerations, not political ones. On this view, the "fundamental conditions for pursuing a good life are various goods, capacities, and options that human beings qua human beings need, whatever else they (qua individuals) might need, in order to pursue the basic activities" [24, pg. 82]. As stated above, this includes self-development and an ability to participate and collectively deliberate in the political arena.

Explicit Consent, Article 7.
The GDPR's focus on explicit consent for data collection and processing is one way in which the conscious, reflective aspects of human experience are valued. For instance, the European Data Protection Board (EDPB) recently clarified its definition of clear and unambiguous consent to processing (see the EDPB guidelines on consent, available at https://edpb.europa.eu/sites/edpb/files/files/file1/edpb_guidelines_202005_consent_en.pdf, accessed 30 June 2020):

    Based on recital 32, actions such as scrolling or swiping through a webpage or similar user activity will not under any circumstances satisfy the requirement of a clear and affirmative action: such actions may be difficult to distinguish from other activity or interaction by a user and therefore determining that an unambiguous consent has been obtained will also not be possible.

Right to Human Intervention ("Human in the Loop"), Article 22.
Besides allowing data subjects to opt out of automated processing, Article 22 has two key provisions: 1) data subjects have a right to obtain human intervention; 2) data subjects can contest the automated decision (and can also access the personal data used to make the decision) [113]. Additionally, data subjects must be "informed" about any automated processes and provided with "meaningful information" about the logic of the decisions and the possible consequences of such automated profiling (see, e.g., Recital 71). Such "meaningful information" also includes the ability to "obtain an explanation of the decision reached" (Recital 71).

Recommendations, particularly in morally salient contexts such as dating or job hunting, potentially fall under this provision. Without knowing that such profiling, even with a "human in the loop," is occurring, and without understanding how the profiling was done, data subjects' rights to due process may be undermined. (Due process is a foundational principle of modern legal systems guaranteeing that procedures of the law are fairly applied to individuals. In the case of automated profiling, due process means notification that one is being profiled and the existence of some procedure through which one can contest the results of the profiling.)

The GDPR's treatment of automated profiling (personalization in our case) reveals an underlying distrust of the notion of quantifying the human experience. This is a theme explicitly tackled by German philosophers of the Sturm und Drang movement and Romanticism more generally. Herder, for instance, eschewed the Enlightenment tendency to fit the particular under the general pattern, to quantify what he believed was inherently qualitative [11]. For him, truth and goodness were not ideal, static Platonic forms, but relative to individuals residing in cultures with unique histories. If competing goals and values cannot be resolved under a general algorithm, as Romanticists such as Herder believed, then keeping a human in the loop is one way to deal with the inevitable problem of value conflict in the ethical realm. The Kierkegaardian conflict of subsuming the infinitely complex individual under an abstract universal rule can also be seen here [9]. We detail more of these ideas in section 3.4.
The Right to Be Forgotten, Article 17.
The genealogy of such a law traces back to the French le droit à l'oubli (the "right of oblivion"), which allowed convicted criminals who had served their time and had been "rehabilitated" to object to the publication of certain facts about their imprisonment [91]. Viviane Reding, then Vice-President of the European Commission, claimed three goals for the right: strengthening individuals' rights to data protection, fostering an EU-wide "Digital Single Market," and imposing greater responsibility on data controllers. According to Reding, it is "important to empower EU citizens, particularly teenagers, to be in control of their own identity online." (Reding's speeches are available at https://ec.europa.eu/commission/presscorner/detail/en/SPEECH_10_700 and https://ec.europa.eu/commission/presscorner/detail/en/SPEECH_12_26, accessed August 2, 2020.)
One question is whether this right applies to, for example, CF-based RS trained on the data of users who have exercised their right to be forgotten.

The right to be forgotten also mirrors the natural workings of autobiographical memory, in which the act of forgetting is essential to constructing a self-narrative over time [22]. Narratives are the outcome of a continual cognitive process by which human experience is shaped into "temporally meaningful episodes" [86]. Persons actively take part in constructing self-narratives to understand themselves, their behavior, and their roles in society [74]. The ability to reconstruct one's personal narrative is particularly important for young people [72]. ([72] finds that young adults may experience any of several identity statuses when exploring identity options and committing to identity goals. For our purposes, the statuses of foreclosure and diffusion are most relevant. Young adults in foreclosure have never fully "explored and questioned the goals and values that were available" to them. Diffusion is when young adults fail to make any commitments: "they do not know what they want (or value) in adult life, and they are not, at the moment, looking to know" [75, pg. 193].) We explore narrative identity more deeply in section 4.
Rights to Access and Modify Personal Data, Article 12.
Against scientific determinism, the GDPR upholds individual subjectivity and expression in creating one's digital representation [92], where the very choice of exercising one's rights inserts noise into the predictive signal of our in-app and on-device behaviors. Data subjects' rights to access, delete, and modify their personal data grant them the ability both to more deeply understand themselves and to subjectively narrate their personal identities over time. Though the technological means for doing so are currently limited, data subjects can already modify their names and genders, and drop nationalities [2].

The right to modify one's personal data to fit one's life narrative provides an expression of agency in that data subjects could potentially choose the description under which their behaviors are understood, thus solving what Pariser calls the "one identity problem" of personalization [82, pg. 65]. The concept of reification is particularly applicable here. Reification is the process through which the contingent manifestations of ideology come to be understood as facts of reality [85]. As the sociologists Berger and Luckmann write, "Reification is the failure to recognize [the social order] as humanly engendered. The decisive question is whether one still retains awareness that... the social world was made by men–and therefore, can be remade by them" [85]. Put simply, the fear is that, as BBD platforms increasingly collect our personal data, their "thin" versions of our digital identities may begin to replace our "thick" personal identities. In the extreme, these thin digital identities may then be confused with our "true" ones, to our own detriment.

As the process of reification shows, without the GDPR's rights to access and modify their personal data, data subjects may fail to realize the extent to which their identities and behaviors have been classified according to the arbitrary rules of the BBD platform. RS designers, in order to avoid this problem of "presumptuousness" [63], could provide users with notifications to confirm their intentions and goals. For example, one step towards this would be to ask data subjects, "You did XYZ, but did you mean ABC?" Doing so would give them the agency to decide the description under which their behaviors should be understood.
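As a toy illustration of such a confirmation step, an interaction of this kind might look as follows; the prompt wording, event labels, and candidate re-descriptions are all hypothetical.

    def confirm_description(event, platform_label, alternatives):
        """Ask the data subject which description their behavior falls under,
        rather than presuming the platform's label is correct."""
        print(f'You did "{event}". We recorded this as "{platform_label}".')
        for i, alt in enumerate(alternatives, start=1):
            print(f"  {i}. {alt}")
        choice = input("Did you mean one of these instead? (number, or Enter to keep): ")
        if choice.strip().isdigit() and 1 <= int(choice) <= len(alternatives):
            return alternatives[int(choice) - 1]
        return platform_label

    # Hypothetical logged event and candidate re-descriptions.
    label = confirm_description(
        event="searched 'six pack in 30 days'",
        platform_label="intent to buy fitness products",
        alternatives=["health research", "curiosity", "searching for a friend"],
    )
    print(f"Stored under the subject's chosen description: {label}")

The design point is that the subject's chosen description, not the platform's inferred one, becomes the record of what the behavior meant.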
Right to Data Portability, Article 20.

Further, rights to access and download one's personal data in portable, machine-readable formats permit the reflective endorsement of in-app and on-device behaviors. Coupled with open source data analytics platforms, data subjects could potentially analyze their own personal data to learn more about their captured behaviors on BBD platforms. This is ostensibly the goal of the "quantified self" movement [68]. Downloading and then analyzing their own data allows data subjects to hold up a mirror to themselves in a new way. Even if they discover that their captured data bear no relation to their self-narratives, they have seen themselves in a new light. Moreover, the dignity, meaning the worth or fittingness, of the human person, a crucial notion in the GDPR [41], is expressed in this focus on reflective endorsement of behaviors and habits. The GDPR affirms the Socratic belief that a life without this self-reflective capacity is not properly a human life.
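For instance, a data subject might inspect a downloaded export with a few lines of code. The file name and schema below are assumptions for illustration only, since each data controller defines its own export format.

    import json
    from collections import Counter

    # Hypothetical GDPR data export: a list of logged events, each carrying
    # a platform-assigned category. Real exports vary by controller.
    with open("my_gdpr_export.json") as f:
        events = json.load(f)

    by_category = Counter(e["category"] for e in events)
    print("How the platform has classified my behavior:")
    for category, count in by_category.most_common():
        print(f"  {category}: {count} logged events")

    # A first step toward narrative accuracy: compare these categories
    # with how one would describe one's own activity.

Even a summary this crude makes visible the descriptions under which one's behavior has been recorded, which is the precondition for contesting or re-narrating them.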
To illustrate some of the major threads on which we base our interpretation of the GDPR, we briefly consider ideas from just a few of the many Enlightenment and Romantic philosophers: Kant, Hegel, Hamann, Herder, and Kierkegaard.
Immanuel Kant.
Kant's attitudes towards autonomy, self-determination, and the public use of reason are best illustrated by a short passage from his 1784 essay, Answering the Question: What Is Enlightenment?

    Laziness and cowardice are the reasons why such a large part of mankind gladly remain minors all their lives, long after nature has freed them from external guidance. They are the reasons why it is so easy for others to set themselves up as guardians. It is so comfortable to be a minor. If I have a book that thinks for me, a pastor who acts as my conscience, a physician who prescribes my diet, and so on, then I have no need to exert myself. I have no need to think, if only I can pay; others will take care of that disagreeable business for me.

This excerpt reveals much about the European legal conception of autonomy and self-determination through one's critical use of reason. (The German Unmündigkeit is here translated as "minor," but has the sense of the English "immature" or "dependent.") Our dignity rests on our capacity for reason and is the normative foundation for securing various data protection and human rights. To fail to exercise these rights is an affront to our dignity as rational beings. We are obligated to use them to progress in our self-understanding, to break free from our self-imposed immaturity.
As rational beings, we possess a potential much greater than our habits, instincts, and impulses. Realizing this potential, however, requires courage. For Kant, passivity is the mark of immaturity; thus we see Habermas emphasizing the process of communicative action and critical debate in the public sphere.

In the modern context of RS, we see Kant's disdain for the blind acceptance of recommendations without critical reflection on whether they might be reflectively endorsed by us [43]. We must dare to know how these recommendations were made and understand the kinds of personal data involved in generating them. For instance, to what extent are they based on the predictive signals of our nonconscious goal-directed behaviors? What about when our in-app behaviors are the result of dark patterns and hypernudges? Kant would implore us to take an active role in the public debate around the regulation of our personal data.

Besides his important contribution to Enlightenment thinking, Kant's metaphysical conclusions famously resulted in a division of human knowledge into two domains: the intelligible (things in themselves) and the sensible (things as they appear to us). Kant concedes human knowledge has its limits, and we cannot know things in themselves, though we may employ reason to speculate on their nature. Insofar as we possess reason and act "in accordance with maxims of freedom as if they were laws of nature," we belong to a "kingdom of ends," although our physical bodies firmly reside in the sensible, predictable realm of the natural world [83, pg. 130-131]. For Kant, humans differ from other animals in this capacity to choose right or wrong, that is, to follow the categorical imperative (which roughly states that one should act only according to a rule which one could also will to be a universal rule for all others in any particular circumstance), and this capacity grounds our ascription of moral desert.

Later influential "neo-Kantians," such as Max Weber and Wilhelm Dilthey, followed Kant's split between lived mental experience and our corporeality and aimed to delineate the "human sciences" from the natural sciences (i.e., Geisteswissenschaften and Naturwissenschaften). Dilthey writes, "The hallmark of inner perception consists in the reference of a fact to the self; the hallmark of outer perception in the reference to the world" [32, pg. 36]. The human sciences were different in that they allowed not just for explanations of events, but for "understanding" (Verstehen) the phenomenological experience of persons [32]. This ontological difference requires different epistemological methods of inquiry. For Dilthey, the method was "epistemological self-reflection" on moral facts, social relations, consciousness, and freedom. These kinds of things could not be studied using methods from the natural sciences, which aimed to formulate universal, atemporal laws of nature. Dilthey argued that socio-historical reality cannot be reduced to a single principle or formula. Similarly, the human sciences (politics, anthropology, sociology, etc.) could not be founded on the mechanical, mathematical model that Laplace used to predict the behavior of astronomical bodies [71, pg. 39].
G.W.F. Hegel.
Hegel's philosophical project is notoriously complicated, but for our purposes we consider only a few important ideas in their more modern form. Hegel famously explained the human need for self-recognition in his master-slave dialectic. For Hegel, self-consciousness arises from the recognition of oneself by "others," who are also self-conscious agents engaging in a kind of mutual recognition. The "slave" is marginalized in the sense that his "self-consciousness is incomplete since he cannot find a reflection of his own autonomy and personhood in the master's eyes" [20, pg. 57]. In other words, self-realization–recognizing oneself as "self"–fundamentally depends on the social recognition of other autonomous agents (persons in society) who recognize your existence as an individuated person with unique desires and goals. Our personal identities depend on this. If we are to see ourselves as independent and autonomous agents, we are required to see others this way too [20, pg. 57]. Hegel did not see us as disembodied Cartesian egos, but as fundamentally interdependent and social creatures embedded in social environments.

Hegel represented a major advance over Kant in that he rejected the idea of an ahistorical, abstract, static, and universal faculty of "reason" separate from historical practices and cultural attitudes [38, pg. 62]. In its place, Hegel gave reason a teleology, or final goal, towards which it inexorably moved in its quest for self-realization [62]. Hegel's addition of a temporal, historical aspect to logic means that something may be both A and not-A, perhaps at a different time. This idea will influence thinking about narrative identity over time.

Recently, social theorist and philosopher Axel Honneth updated these Hegelian ideas to include psychological notions of self-confidence, self-esteem, and self-respect in identity formation [61]. The Hegelian twist is that these can only be acquired through intersubjective, social experience with others. Honneth believes that justice and mutual recognition of personal identity are intrinsically related: "The justice or wellbeing of a society is measured according to the degree of its ability to secure conditions of mutual recognition in which personal identity formation, and hence individual self-realization, can proceed sufficiently well" [61]. Hegel's ideas about identity, dialectic, and mutual recognition have been developed and used by prominent gender theorists, such as Judith Butler, and by various social movements for marginalized social groups, which are founded on claims of identity and autonomy [103, pg. 9].
J.G. Hamann and J.G. Herder.
Hamann and Herder were the quintessential anti-Enlightenment thinkers who opposed Kant's glorification of universal, abstract reason. Hamann influenced the younger Herder and saw himself fighting against the increasing quantification and mechanization of life after Newton. Hamann's attack on Kant and other champions of reason could today be just as easily directed at the data scientist. Enlightenment thinking enables the treatment of men as machines. Science, Hamann claims, was once an expression of our finite and creative capacity, but has now become a "dictator which determines [our] position, morally, politically and personally" [12, Ch. 5, Par. 20].

Herder is arguably the first to give voice to the modern notions of authenticity, personal expression, belonging, and incommensurable ideals [11, Ch. 3, Par. 26]. When we speak of "authenticity" or "living one's truth," we draw on Herder's thought. For Herder, each of us has an original way of being human, making our quest for authenticity and self-realization unique [101, Ch. 3, Par. 9]. Normalization should happen within, not between, persons. Art is really the expression of the unique voice, attitude, and way of life of its author. Consequently, the object of art is inseparable from the identity of the artist; the subject could not be arbitrarily chopped off from the object without also destroying the identity of the object. Moreover, Herder was one of the first to speak of a shared language and soil as a bond between persons [11, Ch. 3, Par. 31]. Group identity and belonging confer a recognizable pattern on the activities of a group's members, giving rise to distinctive cultures, such as "German" or "Chinese" ways of doing things. There is no "universal" language or identity, as human understanding is rooted in the shared symbols and languages of one's root society.

Lastly, Herder rejects the Enlightenment desire to find determinate answers to all questions by the application of a single method whose results might be reduced to a set of mutually derivable propositions [11]. Given our natural penchant for expression and belonging, Herder urged us to develop what we are to the fullest extent we can, to articulate our particular views in the richest way. Herder's praise of diversity was the apparent antidote to Enlightenment unity. Today we see his ideas resurfacing in identity politics, postmodern relativism, and nationalism.
Søren Kierkegaard.
Kierkegaard's ideas can be summarized simply as the primacy of the subjective over the objective. Kierkegaard's legacy is the way in which he contrasted the Enlightenment universalism of Kantian ethics (the categorical imperative) with the uniqueness and "singleness of the single one" [9, Ch. 7, Sec. 3, Par. 6]. Kierkegaard and others after him rejected the idea that the particularities of individual situations should be abstracted away in order to reach universal rules of conduct that all reason-possessing persons could re-derive. Instead, the particularities of the individual situation are primary: (lived, embodied, fleshly) existence precedes (abstract, faceless, unitary) essence. He thus rejected the Kantian distinction between an unknowable abstract metaphysical realm and an observable, predictable realm of experience. Kierkegaard conceived of human existence as inherently textured by self-defining choices which are "too dense, rich, and concrete" to be represented as atomistic concepts [9, Ch. 7, Sec. 2, Par. 9]. The human experience cannot be abstracted into a concept, but must be lived, experienced, and embodied as a subject of life.
From Kant's moral philosophy, we might claim that nonconscious behaviors are outside the realm of the moral because they were not the result of applying a universal maxim. This means that assertions about our logged behaviors reflecting our interests and preferences are not truly moral statements, which seems contradictory. After all, we might define our interests and preferences precisely as those things which we believe to be good. In any case, a Kantian might conclude that if the logged behavior used to make predictions and recommendations stems from nonconscious sources, it should not be taken to represent our moral identities.

From the Romanticists Hamann and Herder, we might claim that data scientists at BBD platforms create "arbitrary divisions of reality" (i.e., selectively log behaviors in apps and devices) in order to build "castles in the air" (predictive models of one's "true" preferences) [11]. Put into today's language, the pre-defined categories of recorded behavior, which are in turn used by RS to make predictions, are categories of the data scientist's choosing and do not exhaust the reality of the data subject as he interacts with a device or app. Even in so-called "context-aware" recommender systems [1], interpretations of the various contexts are one-sided and fail to account for contested interpretations by the data subject. This problem will become worse if BBD platforms continue moving towards reinforcement learning-based RS.

From Hegel's story of the master-slave relationship we can derive an important principle for participatory RS design. We claim the GDPR's rights of access and modification can support this dual process of identity formation, particularly when combined with a narrative approach that evolves over time. Much as the master-slave dialectic shows, downloading and viewing one's logged behaviors used by an RS permits one to see oneself from the gaze of the system designer and potentially choose to change one's self-understanding as a result. At the same time, the act of negotiating meaning on the part of RS designers opens the possibility of enlarging their own self-understanding as a result of understanding the differing perspectives of others.

From Kierkegaard, we could view the GDPR and its rights to obtain, modify, and delete one's data as potentially confronting one with a series of self-defining choices. At the same time, Kierkegaard would object to most RS "personalization" that relies on optimization of a global objective function over aggregate users' data, since the parameters of such a function are not strictly related to my personal data alone. Similarly, Kierkegaard might argue that collaborative filtering (CF) and its reliance on the logged behaviors of others is irrelevant for my particular choice.

The story of human communication properly begins in logos, or discourse. Logos subsumed the concept of everything from word, thought, narrative, and story to reason and rationale [37]. Only later, through a distinction made between mythos and logos by Plato and Aristotle, did logos become associated with a specific kind of philosophical and technical discourse, leaving us with the word logic today.
Mythos, on the other hand, became associated with poetry, while rhetoric uneasily fell somewhere in between [37]. The work of Jacques Derrida has notably questioned the assumptions behind this split, which has deeply influenced the course of Western metaphysics and mathematics. Other important thinkers, such as Jürgen Habermas and Chaim Perelman [84], have tried to reclaim reason's original connection with rhetoric, communication, and dialog as a compromise position.

Today, the modern concept of narrative plays an increasingly unifying role in a variety of disciplines, from clinical, developmental, social, and cognitive psychology to philosophy, literature, and even neuroscience [25, 97]. The discussion here draws mostly from the philosophical perspective. Our claim is that narrative identity, with its dynamic and diachronic aspects, can unify both moral and social identity and possesses an explanatory force that accounts for human meaning and experience.
Self and Identity.
According to identity theorists in sociology and psychology, the self is fluid and occupies multiple social roles or group identities that coexist and vary over time. The dynamism of the self-concept is reflected in its numerous representations. Persons have representations of a past, present, and future self, and also an ideal, "ought," actual, possible, and undesired self [73]. Identity theorists highlight how a unified self-conception arises from the variety of meanings given to the various social roles the self occupies [100].
Social and Moral Identity.
Common types of social identities relate to one's ethnicity, religion, political affiliation, job, and relationships. People may also identify strongly with their gender, sexual orientation, and various other "stigmatized" identities, such as being homeless, an alcoholic, or overweight [28]. (Most of these social identities would be sensitive personal data under the GDPR, and they would be reviewable and modifiable by data subjects themselves.) Social psychologists generally agree our self-concept is one of the most important regulators of our behavior.

Psychologist and linguist Michael Tomasello contends that our membership in a linguistic community binds our social and moral identities. The reasons we give for our behaviors are related to our role and status within this community. Starting at a young age, children must make their own decisions about what to do and which moral and social identities to form [105, pg. 288]. Children make these decisions in ways justifiable both to others in their community and to themselves. The connection between the social and the moral lies in the way in which the reason-giving process to others was, in time, internalized into a form of normative self-governance. Our psychical unity requires we do certain things in order to continue to be the persons we are, seen from both the inner perspective (self, private) and the outer (other, public).

Personality psychology has also begun to study the formation of moral identity. For example, [3] show how highly important moral identities–the collection of certain beliefs, attitudes, and behaviors relating to what is right or wrong–can provide a basis for the construction of one's "self-definition." In short, our social and moral identities are crucial to both our self-understanding and our understanding of others.
Narrative Identity.
Our real focus, however, is on narrative identity, as it encompasses both moral and social identity. Moral and social identities are synchronic (cross-sectional) structures. They are how we represent ourselves to ourselves at particular points in time. But we have not yet explained how these identities evolve over time. For that, we need a diachronic (longitudinal) account of identity.

The discussion below largely follows work by the influential psychologist Jerome Bruner and the philosopher Paul Ricoeur. According to Bruner, narratives "[operate] as an instrument of mind in the construction of reality" [15] and exhibit many unique features, including:

• Diachronicity: narratives account for sequences of ordered events over human time, not absolute "clock time."
• Particularity: narratives are accounts of temporally-ordered events told from the particular embodiment of their narrator(s).
• Intentional state entailment: within a narrative, reasons are intentional states (beliefs, desires, values, etc.) which act as causes and/or explanations.
• Hermeneutic composability: gaps exist between the text and the meaning of the text. Meaning arises from understanding relations of parts to whole.
• Canonicity and breach: narratives are more than just pointless "scripts." They often arise in order to explain an anomalous event after the fact.
• Referentiality: realism in narrative derives from consensus, not from correspondence to some "true" reality.
• Genericness: there are cultural patterns to various narratives, each expressing a version of human plights and tragedies.
• Normativeness: if narratives arise after breaches, this presupposes certain norms guiding our expectations about what will/should happen.
• Context sensitivity and negotiation: readers of a text "assimilate it on their own terms," thereby changing themselves in the process. We negotiate meaning via dialogue.
• Accrual: an individual narrative can spread throughout a culture, thereby allowing it to grow and evolve collectively.

In contrast to Bruner's more psychological account, Paul Ricoeur combines an account of ethics with narrative identity. His goal is nothing less than to express a vision of the "good life with and for others in just institutions" [89, pg. 240]. The philosophical question he poses is: how can one and the same person grow and change over time? How can such seemingly contrary concepts as identity and diversity be reconciled in one person?

He answers via the notions of idem and ipse identity [89]. Idem is synonymous with sameness, typically associated with our character. Ipse is selfhood, or self-constancy. It is what allows others to count on a person, and what makes him accountable when making promises to others. (Ricoeur states that self-constancy is the response of "Here I am!" when someone in need asks "Where are you?" [89, pg. 165].) Ricoeur's narrative account solves the problem of connecting permanence in time of character (sameness) with self-constancy (a kind of promise to vary within limits) [89, pg. 166]. Through the diachronicity of narrative, we unite our moral and social identities over time, giving rise to the uniqueness of persons.
Good narratives and literature arise from imaginative variations between these two poles of sameness and selfhood over time. Ricoeur understands the self as fundamentally the "character-narrator" of its own history, "emplotted" between conflicting demands for concordance and a natural, entropic drive towards discordance [89, pg. 141]. Emplotment confers a unity, an internal structure, and a completeness on the story [89, pg. 143], giving one a coherent life story. However, if the discordance between events in the plot becomes too great, one's identity becomes threatened. If the discordance grows too wide, one's personal identity can dissolve in an "identity crisis." Similarly, schizophrenia may be viewed as an example where the coherence of one's narrative identity has unraveled.

Similar to Aristotle's notion of mythos, narrative events differ from mere occurrences by virtue of a certain "effect of necessity" or expectation from which they arise. Much as Bruner describes, only after the fact do events become part of a narrative, and their configuration within this narrative–their relation as a part to the whole of the story–is what accounts for their meaning and explanatory force [89, pg. 142]. We can only understand the conclusion of a narrative by reference to the earlier parts of the story which brought us there. As philosopher Charles Taylor puts it, "what we grasp as an important truth through a story–be it that of our own life, or of some historical event–is so bound up with how we got there–which is what the story relates–that it can't simply be hived off, neglecting the chain of events which brought us here" [102, Pt. III, sec. 8].

Unlike scientific observations or chronicles of events in sequence, experiences recounted in narratives always have some moral tinge of approval or disapproval, rightness or wrongness. Relating to the hermeneutical issues of context, Ricoeur writes that imputability is the "ascription of action to its agent, under the condition of ethical and moral predicates" [89, pg. 292]. Finally, in narrative the notion of absolute time is unnecessary because the character has the power to initiate a "beginning of time," and to assign a beginning, middle, and end to an action [89, pg. 147].
In this section we attempt to bring together narrative identity and narrative accuracy using Miranda Fricker's concept of epistemic injustice [44]. In philosophy, epistemology is the study of knowledge and its foundations. In order to apply her ideas to the realm of RS, we will, however, need to take some interpretive liberties. This discussion will ground the ethical vocabulary for thinking about and discussing the narrative accuracy of RS and their training data.

Typically, justice is understood as a kind of fair or fitting distribution of goods–one receives what one is properly owed. That is not quite what Fricker means by the term, as credibility is not really a finite good [44, pg. 20]. She is interested in injustice as it relates to disrespecting one in one's "capacity as a knower." The astute reader will recognize the Kantian themes here: Fricker starts from the premise that we know ourselves best and have an equal stake in truth claims. As [93, pg. 8] point out, epistemic injustice "undermines individuals' trust in their own judgment and reasoning [and diminishes] their sense of intellectual agency." There is equally a Hegelian aspect to epistemic injustice in that it requires a mutual recognition of the perspective and experience of others, particularly those in positions of asymmetrical epistemic power (i.e., data subjects relative to data collectors).
Testimonial and Hermeneutical Injustice.
Fricker develops two forms of epistemic injustice applicable to the case of data subjects receiving personalized recommendations. First, testimonial injustice might occur when prejudice or bias leads a data collector to give a "deflated level of credibility" to a data subject's interpretation of a recorded action or event, including a recommendation [44, pg. 1]. For example, if a data collector only uses nonconsciously-generated BBD and does not weight explicit feedback, a kind of testimonial injustice has occurred. Another example might be that a BBD platform allows users to "downrate" bad recommendations, but these downratings are not actually factored into changing the recommendations. From a Bayesian perspective, we can also conceive of testimonial injustice as a case where uncertainty in model selection is ignored: the subjective "model" of the data subject is discarded in favor of the pre-defined model of the RS designer. Even more generally, we can view it as an instance of the problem of data fusion, or information quality, with ethical implications.

In contrast, hermeneutical injustice may arise when a data collector or data collection platform lacks the "interpretive resources" to make sense of the data subject's lived experience, thereby putting him at a disadvantage [44, pg. 1]. The fundamental question is: what counts as what? Currently, for instance, the categories of events recorded by BBD platforms are typically pre-defined by system designers without any input from platform users. If designers of RS do not consider the diversity and richness of data subjects' intended actions, values, and goals while using the system, hermeneutical injustice will be unavoidable. Under one interpretation of an event we may generate certain statistical regularities, while under another we may get different statistical regularities, which become encoded in the parameters of ML models. It follows there is no one "best" representation or encoding of BBD. There are simply different representations under different interpretations of what counts as what.

One way to potentially mitigate this would be for BBD platforms to ask users for explicit clarification of the meaning of key events and actions recorded within an app and used in predictive algorithms to generate recommendations. We saw earlier that RL systems are at particular risk of hermeneutical injustice towards their data subjects.

With these ideas in mind, we can say the narrative accuracy of RS can be low either through weighing certain kinds of evidence too low or through failing to negotiate the meaning of events or recommendations with data subjects. Fortunately, the GDPR gives data subjects some legal tools for engaging with and modifying their personal data generated and stored on BBD platforms. So if data subjects exercise their rights under the GDPR, and RS designers move away from BBD towards more explicit feedback, as in [35], narrative accuracy can be shaped and improved in a communicative process with the data subject.

This whole process of seeing oneself through the eyes of others (the BBD platform and the RS recommendations) leads to self-realization and the opportunity to reflectively endorse certain behaviors online under a more appropriate description, as seen from the perspective of the data subject.
When data subjects can choose the description under which their actions take place and are recorded, it opens the possibility of understanding their experience online instead of forcing it to fit the designers' interpretation of "what counts as what." This communicative process of narrative shaping through dialog with the data subject can be seen as one approach to participatory design of RS.
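To make this communicative process concrete, consider the minimal sketch below. It is an illustration, not a proposal for a production system: the event, the labels, and the `credibility` parameter are all hypothetical, and the blending rule (giving a data subject's explicit, reflectively endorsed feedback a non-deflated weight alongside passively logged behavior) is only one possible way of operationalizing the reduction of testimonial and hermeneutical injustice described above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LoggedEvent:
    """A behavioral record whose meaning is initially fixed by the designer."""
    item_id: str
    designer_label: str                  # pre-defined category ("what counts as what")
    subject_label: Optional[str] = None  # the data subject's own description

    def relabel(self, label: str) -> None:
        # GDPR-style rectification: the subject contests the designer's interpretation.
        self.subject_label = label

def narrative_score(implicit: float, explicit: Optional[float],
                    credibility: float = 0.8) -> float:
    """Blend implicit (logged) and explicit (reflective) signals.

    `credibility` is a hypothetical weight on the subject's own testimony;
    setting it near zero reproduces the testimonial injustice described
    above, where downrates are collected but never change recommendations.
    """
    if explicit is None:
        return implicit
    return credibility * explicit + (1.0 - credibility) * implicit

# The subject contests the designer's interpretation of a logged event...
event = LoggedEvent("video_123", designer_label="interest:gambling")
event.relabel("background research, not a personal preference")

# ...and an explicit downrate (-1.0) now outweighs the implicit watch signal (0.9).
print(narrative_score(implicit=0.9, explicit=-1.0))  # approx. -0.62
```

The point is not this particular weighting rule, but where interpretive authority sits: both the label and the weight are, in principle, open to negotiation with the data subject rather than fixed unilaterally by the designer.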
The downside to emphasizing the subjectivity of human experience in narrative form is that we might lose the possibility of a common, inter-subjective understanding of "truth." This is a frequent criticism of postmodern thought: the thinking goes that we must therefore accept nihilism, or its equally disagreeable twin, relativism. But doing so would be to open the door to difference and creativity, only to close it again, overwhelmed by what we find.

Fortunately, many thinkers have already grappled with this problem. Although they agree there is no single method that can deliver absolute knowledge, we can find minimal criteria of soundness that provide a practical grounding for a theory of truth. For instance, we might try to articulate "principles of charity" [26] in interpreting narratives and "good faith" in constructing them. We briefly sketch two attempts at grounding the truth claims of narrative knowing.

The philosopher Marya Schechtman has described two "constraints" on the content of such personal narratives, if we are to realize our self-interests, get moral credit for our efforts, and take responsibility for our actions: the reality constraint and the articulation constraint. According to [5, pg. 79], the reality constraint requires narratives to have some minimal level of coherence with reality. The articulation constraint says that the maker of a narrative must be capable of some minimal level of reflective articulation of one's actions and thoughts in order to properly be responsible for them.

In communication studies, Walter Fisher elaborated the notion of narrative rationality and gave qualitative criteria for assessing it [37]. Fisher's criteria break down into 1) narrative probability (coherence): does the story hang together, and is it free of contradictions?; and 2) narrative fidelity (correspondence): are the reasons given logically sound, and do they reflect a consistent set of values? [110]. (A toy sketch of these two criteria appears at the end of this section.)

These examples show that by embracing narrative forms of knowledge, we are not forced to abandon all criteria and any hope for consensus on "truth." In fact, as the pragmatist philosopher Richard Rorty has famously argued, social convention, rather than representational similarity, can serve as the basis for a theory of truth [90]. In Rorty's version of pragmatism, truth is defined relative to human goals and purposes, which are articulated through a process of communication and social consensus over time.
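As a rough, heavily simplified illustration of how Fisher's two criteria might be operationalized for the self-narratives data subjects attach to their logged events, consider the toy checks below. The data structures and the mechanical notion of "contradiction" are our own assumptions; real judgments of coherence and fidelity are interpretive and dialogic, not computable.

```python
from typing import List, Set, Tuple

# A narrative is modeled here as an ordered list of (event, reason) pairs,
# where each reason is an intentional state the data subject offers.
Narrative = List[Tuple[str, str]]

def narrative_probability(story: Narrative,
                          contradictions: Set[Tuple[str, str]]) -> bool:
    """Coherence: does the story hang together, free of contradictions?"""
    reasons = [reason for _, reason in story]
    return not any((a, b) in contradictions or (b, a) in contradictions
                   for i, a in enumerate(reasons)
                   for b in reasons[i + 1:])

def narrative_fidelity(story: Narrative, declared_values: Set[str]) -> float:
    """Fidelity: what share of the offered reasons reflects the subject's
    own declared, consistent set of values?"""
    reasons = [reason for _, reason in story]
    return sum(reason in declared_values for reason in reasons) / len(reasons)

story = [("downrated gambling ad", "avoids gambling"),
         ("watched gambling video", "research for an article")]
print(narrative_probability(story, contradictions=set()))  # True
print(narrative_fidelity(story, {"avoids gambling"}))      # 0.5
```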
Humanistic personalization aligns with the spirit of Human-Centered AI [95]. Making and sustaining a coherent digital self-narrative is a uniquely human capacity which we cannot leave up to others or outsource to automated agents. This sentiment is shared by the GDPR and the IEEE EAD principles. We are characters in the stories we tell about ourselves. We know which events define us, we know which values drive us, we know the causes (reasons) behind our actions. And if we do not, we have the capacity to try to find out. The corporate owners of BBD collection platforms and RS designers may make claims to the contrary based on our observed behaviors, but we believe that rights to informational self-determination trump these assertions. Humanistic personalization promotes individual autonomy [81] by cultivating unique desires and personalities and by giving us the chance to create our own "moral laws" which guide us in our decisions and explain our actions.

Ultimately, as postmodernists have pointed out, problems of ethics and interpretation are inseparable. What we believe to be true influences our decisions about what is right. But if meaning is socially constructed, data subjects alone cannot solve these problems. It will take both a community and good-faith communication to work out the "rules" of our common language game. The designers of RS will need to play a larger role in this dialectic of meaning negotiation and identity formation in the digital sphere. After all, if the original meaning of category is to "publicly accuse," the data subject, as a member of the public, should play a part in that process.

A major limitation of our GDPR-based approach is that we assume our readers will agree with the European conception of the human person we have laid out. We assume readers share similar beliefs about what makes a human person unique and which capacities might be so essential as to require rights (and corresponding duties) to secure them. In an increasingly splintered yet interconnected digital world, searching for grand narratives of reason and truth appears misguided. Yet we believe the compromise position, based on narrative understanding and the dialogic participation of diverse groups, is the only way to avoid relativism and nihilism.

Beyond personalization, a focus on narrative could have wide-ranging consequences for the future of AI/ML. If we are ever to "crash the barrier of meaning in AI" [78], we will need to crash through the barrier of narrative. Further, the inherent intelligibility of narrative could be useful in the emerging area of explainable AI, especially where regulations such as the GDPR give data subjects rights to clear, understandable explanations of algorithmic decisions. Lastly, due to its intuitive "explanatory force" [108], narrative explanation could serve as an interesting lens for new approaches to causal modeling. How are events in narratives caused, and how can this be reconciled with claims of causation deriving from induction and observation?

Skeptics might counter that optimizing for narrative accuracy will require a trade-off in the ability of RS to accurately recommend items and predict specific behaviors–particularly nonconscious ones. Business profits may also be affected. Data scientists may need re-training. Nevertheless, the GDPR forces us to ask the question: Do we ultimately wish to represent ourselves according to the needs and interests of business, or of humans?

REFERENCES
[1] Adomavicius, G., and Tuzhilin, A. Context-aware recommender systems. In Recommender Systems Handbook. Springer, 2011, pp. 217–253.
[2] Andrade, N. N. G. d. Oblivion: The right to be different from oneself – re-proposing the right to be forgotten. In VII International Conference on Internet, Law & Politics. Net Neutrality and Other Challenges for the Future of the Internet, IDP. Revista de Internet, Derecho y Política (2012), no. 13, pp. 122–137.
[3] Aquino, K., and Reed, A. The self-importance of moral identity. Journal of Personality and Social Psychology 83, 6 (2002), 1423.
[4] IEEE Standards Association, et al. The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Ethically Aligned Design: A vision for prioritizing human well-being with autonomous and intelligent systems, 2019.
[5] Atkins, K. Narrative Identity and Moral Identity. Taylor & Francis, 2010.
[6] Baird, D. Thing Knowledge: A Philosophy of Scientific Instruments. Univ. of California Press, 2004.
[7] Balka, E. Inside the belly of the beast: The challenges and successes of a reformist participatory agenda. In Proceedings of the Ninth Conference on Participatory Design: Expanding Boundaries in Design, Volume 1 (2006), pp. 134–143.
[8] Bargh, J. A., Gollwitzer, P. M., Lee-Chai, A., Barndollar, K., and Trötschel, R. The automated will: Nonconscious activation and pursuit of behavioral goals. Journal of Personality and Social Psychology 81, 6 (2001), 1014.
[9] Barrett, W. Irrational Man: A Study in Existential Philosophy, vol. 321. Anchor, 1958.
[10] Benhabib, S. Situating the Self: Gender, Community, and Postmodernism in Contemporary Ethics. Psychology Press, 1992.
[11] Berlin, I. The Roots of Romanticism. Princeton University Press, 2013.
[12] Berlin, I. Three Critics of the Enlightenment: Vico, Hamann, Herder. Princeton University Press, 2013.
[13] Bourdieu, P. Sociologie générale: Cours au Collège de France 1981–1983, vol. 1. Le Seuil, 2015. English translation.
[14] Broder, A. A taxonomy of web search. In ACM SIGIR Forum (2002), vol. 36, ACM New York, NY, USA, pp. 3–10.
[15] Bruner, J. The narrative construction of reality. Critical Inquiry 18, 1 (1991), 1–21.
[16] Burke, R., Sonboli, N., and Ordonez-Gauger, A. Balanced neighborhoods for multi-sided fairness in recommendation. In Conference on Fairness, Accountability and Transparency (2018), pp. 202–214.
[17] Callender, C., and Cohen, J. There is no special problem about scientific representation. Theoria 55 (2006), 67–85.
[18] Cane, P., and Gessner, V. Responsibility in Law and Morality. Hart, Oxford, 2002.
[19] Carrabis, J. System and method for determining a characteristic of an individual, Feb. 18, 2014. US Patent 8,655,804.
[20] Chazan, P. The Moral Self. Routledge, 2002.
[21] Cheney-Lippold, J. A new algorithmic identity: Soft biopolitics and the modulation of control. Theory, Culture & Society 28, 6 (2011), 164–181.
[22] Conway, M. A., and Pleydell-Pearce, C. W. The construction of autobiographical memories in the self-memory system. Psychological Review 107, 2 (2000), 261.
[23] Covington, P., Adams, J., and Sargin, E. Deep neural networks for YouTube recommendations. In Proceedings of the 10th ACM Conference on Recommender Systems (2016), pp. 191–198.
[24] Cruft, R., Liao, S. M., and Renzo, M. Philosophical Foundations of Human Rights. Oxford University Press, 2015.
[25] Damasio, A. R. The Feeling of What Happens: Body and Emotion in the Making of Consciousness. Houghton Mifflin Harcourt, 1999.
[26] Davidson, D. On the very idea of a conceptual scheme. In Proceedings and Addresses of the American Philosophical Association (1973), vol. 47, JSTOR, pp. 5–20.
[27] De Vries, K. Identity, profiling algorithms and a world of ambient intelligence. Ethics and Information Technology 12, 1 (2010), 71–85.
[28] Deaux, K. Social identities: Thoughts on structure and change. The Relational Self: Theoretical Convergences in Psychoanalysis and Social Psychology 77 (1991), 93.
[29] Deleuze, G. Postscript on the societies of control. October 59 (1992), 3–7.
[30] Derrida, J. Limited Inc. Northwestern University Press, 1988.
[31] Derrida, J. Of Grammatology. JHU Press, 2016.
[32] Dilthey, W. Introduction to the Human Sciences, vol. 1. Princeton University Press, 1989.
[33] Dourish, P. What we talk about when we talk about context. Personal and Ubiquitous Computing 8, 1 (2004), 19–30.
[34] Dourish, P. Where the Action Is: The Foundations of Embodied Interaction. MIT Press, 2004.
[35] Ekstrand, M. D., and Willemsen, M. C. Behaviorism is not enough: Better recommendations through listening to users. In Proceedings of the 10th ACM Conference on Recommender Systems (2016), pp. 221–224.
[36] Ferguson, M. J. On becoming ready to pursue a goal you don't know you have: Effects of nonconscious goals on evaluative readiness. Journal of Personality and Social Psychology 95, 6 (2008), 1268.
[37] Fisher, W. R. The narrative paradigm: In the beginning. Journal of Communication 35, 4 (1985), 74–89.
[38] Fleischacker, S. What Is Enlightenment? Routledge, 2013.
[39] Floridi, L. The ontological interpretation of informational privacy. Ethics and Information Technology 7, 4 (2005), 185–200.
[40] Floridi, L. The Onlife Manifesto: Being Human in a Hyperconnected Era. Springer Nature, 2015.
[41] Floridi, L. On human dignity as a foundation for the right to privacy. Philosophy & Technology 29, 4 (2016), 307–312.
[42] Frankfurt, H. G. The problem of action. American Philosophical Quarterly 15, 2 (1978), 157–162.
[43] Frankfurt, H. G. Freedom of the will and the concept of a person. In What Is a Person? Springer, 1988, pp. 127–144.
[44] Fricker, M. Epistemic Injustice: Power and the Ethics of Knowing. Oxford University Press, 2007.
[45] Garcia, J., and Fernández, F. A comprehensive survey on safe reinforcement learning. Journal of Machine Learning Research 16, 1 (2015), 1437–1480.
[46] Gauchou, H. L., Rensink, R. A., and Fels, S. Expression of nonconscious knowledge via ideomotor actions. Consciousness and Cognition 21, 2 (2012), 976–982.
[47] Gauci, J., Conti, E., Liang, Y., Virochsiri, K., He, Y., Kaden, Z., Narayanan, V., Ye, X., Chen, Z., and Fujimoto, S. Horizon: Facebook's open source applied reinforcement learning platform. arXiv preprint arXiv:1811.00260 (2018).
[48] Gergen, K. J. The social constructionist movement in modern psychology.
[49] Gibbard, A., and Varian, H. R. Economic models. The Journal of Philosophy 75, 11 (1978), 664–677.
[50] Giere, R. N. Scientific Perspectivism. University of Chicago Press, 2010.
[51] Gray, C. M., Kou, Y., Battles, B., Hoggatt, J., and Toombs, A. L. The dark (patterns) side of UX design. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (2018), pp. 1–14.
[52] Greene, T., Shmueli, G., Ray, S., and Fell, J. Adjusting to the GDPR: The impact on data scientists and behavioral researchers. Big Data 7, 3 (2019), 140–162.
[53] Guo, Q., and Agichtein, E. Ready to buy or just browsing? Detecting web searcher goals from interaction data. In Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval (2010), pp. 130–137.
[54] Habermas, J. Europe: The Faltering Project. Polity, 2009.
[55] Harrison, S., Sengers, P., and Tatar, D. Making epistemological trouble: Third-paradigm HCI as successor science. Interacting with Computers 23, 5 (2011), 385–392.
[56] Hart, H. L. A., and Green, L. The Concept of Law. Oxford University Press, 2012.
[57] Hassin, R. R., Uleman, J. S., and Bargh, J. A. The New Unconscious. Oxford University Press, 2004.
[58] Heylighen, F., Cilliers, P., and Gershenson, C. Complexity and philosophy. arXiv preprint cs/0604072 (2006).
[59] Hijmans, H., and Raab, C. D. Ethical dimensions of the GDPR. In Commentary on the General Data Protection Regulation. Cheltenham: Edward Elgar, 2018 (forthcoming).
[60] Hildebrandt, M. Smart Technologies and the End(s) of Law: Novel Entanglements of Law and Technology. Edward Elgar Publishing, 2015.
[61] Honneth, A. Recognition and justice: Outline of a plural theory of justice. Acta Sociologica 47, 4 (2004), 351–364.
[62] Houlgate, S. The Opening of Hegel's Logic: From Being to Infinity. Purdue University Press, 2006.
[63] King, O. C. Presumptuous aim attribution, conformity, and the ethics of artificial social cognition. Ethics and Information Technology 22, 1 (2020), 25–37.
[64] Kitchin, R., and Dodge, M. Code/Space: Software and Everyday Life. MIT Press, 2011.
[65] Knijnenburg, B. P., Sivakumar, S., and Wilkinson, D. Recommender systems for self-actualization. In Proceedings of the 10th ACM Conference on Recommender Systems (2016), pp. 11–14.
[66] Kramer, A. D., Guillory, J. E., and Hancock, J. T. Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences 111, 24 (2014), 8788–8790.
[67] Lakoff, G., and Johnson, M. Metaphors We Live By. University of Chicago Press, 2008.
[68] Lupton, D. The Quantified Self. John Wiley & Sons, 2016.
[69] Lynskey, O. The Foundations of EU Data Protection Law. Oxford University Press, 2015.
[70] Manders-Huits, N. Practical versus moral identities in identity management. Ethics and Information Technology 12, 1 (2010), 43–55.
[71] Manicas, P. T. A Realist Philosophy of Social Science: Explanation and Understanding. Cambridge University Press, 2006.
[72] Marcia, J. E. Development and validation of ego-identity status. Journal of Personality and Social Psychology 3, 5 (1966), 551.
[73] Markus, H., and Wurf, E. The dynamic self-concept: A social psychological perspective. Annual Review of Psychology 38, 1 (1987), 299–337.
[74] McAdams, D. P. Personality, modernity, and the storied self: A contemporary framework for studying persons. Psychological Inquiry 7, 4 (1996), 295–321.
[75] McAdams, D. P., and Pals, J. L. A new big five: Fundamental principles for an integrative science of personality. American Psychologist 61, 3 (2006), 204.
[76] McNee, S. M., Riedl, J., and Konstan, J. A. Being accurate is not enough: How accuracy metrics have hurt recommender systems. In CHI'06 Extended Abstracts on Human Factors in Computing Systems (2006), pp. 1097–1101.
[77] Miltenberger, R. G. Behavior Modification: Principles and Procedures. Cengage Learning, 2011.
[78] Mitchell, M. Artificial intelligence hits the barrier of meaning. Information 10, 2 (2019), 51.
[79] Mittelstadt, B. Principles alone cannot guarantee ethical AI. Nature Machine Intelligence (2019), 1–7.
[80] Nichols, D. Implicit rating and filtering. In Proceedings of the Fifth DELOS Workshop on Filtering and Collaborative Filtering (1998), ERCIM, pp. 31–36.
[81] O'Neill, O. Autonomy and Trust in Bioethics. Cambridge University Press, 2002.
[82] Pariser, E. The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think. Penguin, 2011.
[83] Paton, H. J. The Moral Law: Kant's Groundwork of the Metaphysic of Morals. Hutchinson University Library, 1948.
[84] Perelman, C. The Realm of Rhetoric. Trans. W. Kluback. University of Notre Dame Press, Notre Dame, 1982.
[85] Pitkin, H. F. Rethinking reification. Theory and Society 16, 2 (1987), 263–293.
[86] Polkinghorne, D. E. Narrative Knowing and the Human Sciences. SUNY Press, 1988.
[87] Powers, W. T. Feedback: Beyond behaviorism: Stimulus-response laws are wholly predictable within a control-system model of behavioral organization. Science 179, 4071 (1973), 351–356.
[88] Ricoeur, P. The model of the text: Meaningful action considered as a text. Social Research (1971), 529–562.
[89] Ricoeur, P. Oneself as Another. University of Chicago Press, 1994.
[90] Rorty, R. Philosophy and the Mirror of Nature, vol. 81. Princeton University Press, 2009.
[91] Rosen, J. The right to be forgotten. Stan. L. Rev. Online 64 (2011), 88.
[92] Rouvroy, A., and Poullet, Y. The right to informational self-determination and the value of self-development: Reassessing the importance of privacy for democracy. In Reinventing Data Protection? Springer, 2009, pp. 45–76.
[93] Sherman, B. R., and Goguen, S. Overcoming Epistemic Injustice: Social and Psychological Perspectives. Rowman & Littlefield International, 2019.
[94] Shmueli, G. Analyzing behavioral big data: Methodological, practical, ethical, and moral issues. Quality Engineering 29, 1 (2017), 57–74.
[95] Shneiderman, B. Human-centered artificial intelligence: Reliable, safe & trustworthy. International Journal of Human–Computer Interaction 36, 6 (2020), 495–504.
[96] Shoemaker, D. W. Self-exposure and exposure of the self: Informational privacy and the presentation of identity. Ethics and Information Technology 12, 1 (2010), 3–15.
[97] Singer, J. A. Narrative identity and meaning making across the adult lifespan: An introduction. Journal of Personality 72, 3 (2004), 437–460.
[98] Skinner, B. F. Science and Human Behavior. Simon and Schuster, 1965.
[99] Stray, J., Adler, S., and Hadfield-Menell, D. What are you optimizing for? Aligning recommender systems with human values.
[100] Stryker, S., and Burke, P. J. The past, present, and future of an identity theory. Social Psychology Quarterly 63, 4 (2000), 284–297.
[101] Taylor, C. The Malaise of Modernity: The CBC Massey Lectures, 1991.
[102] Taylor, C. The Language Animal. Harvard University Press, 2016.
[103] Taylor, J. K., Haider-Markel, D. P., and Lewis, D. C. The Remarkable Rise of Transgender Rights. University of Michigan Press, 2018.
[104] Thompson, J. B. Critical Hermeneutics: A Study in the Thought of Paul Ricoeur and Jürgen Habermas. Cambridge University Press, 1981.
[105] Tomasello, M. Becoming Human: A Theory of Ontogeny. Belknap Press, 2019.
[106] Van der Sloot, B. Decisional privacy 2.0: The procedural requirements implicit in Article 8 ECHR and its potential impact on profiling. International Data Privacy Law 7, 3 (2017), 190–201.
[107] Van Fraassen, B. C. Scientific Representation: Paradoxes of Perspective. Oxford University Press, 2010.
[108] Velleman, J. D. Narrative explanation. The Philosophical Review 112, 1 (2003), 1–25.
[109] Vincent, P., Larochelle, H., Lajoie, I., Bengio, Y., and Manzagol, P.-A. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research 11, Dec (2010), 3371–3408.
[110] Warnick, B. The narrative paradigm: Another story. Quarterly Journal of Speech 73, 2 (1987), 172–182.
[111] Weiss, M. A., and Archick, K. US-EU data privacy: From Safe Harbor to Privacy Shield. Tech. rep., Congressional Research Service, 2016.
[112] Yeung, K. 'Hypernudge': Big data as a mode of regulation by design. Information, Communication & Society 20, 1 (2017), 118–136.
[113] Zarsky, T. Z. Incompatible: The GDPR in the age of big data. Seton Hall L. Rev. 47 (2016), 995.
[114] Zhao, X., Xia, L., Tang, J., and Yin, D. Deep reinforcement learning for search, recommendation, and online advertising: A survey. ACM SIGWEB Newsletter, Spring (2019), 1–15.
[115] Zuboff, S.