Exploring Design and Governance Challenges in the Development of Privacy-Preserving Computation
Nitin Agrawal, Reuben Binns, Max Van Kleek, Kim Laine, Nigel Shadbolt
Nitin Agrawal ([email protected]), University of Oxford, Oxford, UK
Reuben Binns ([email protected]), University of Oxford, Oxford, UK
Max Van Kleek ([email protected]), University of Oxford, Oxford, UK
Kim Laine ([email protected]), Microsoft Research, Redmond, WA, USA
Nigel Shadbolt ([email protected]), University of Oxford, Oxford, UK
ABSTRACT
Homomorphic encryption, secure multi-party computation, and differential privacy are part of an emerging class of Privacy Enhancing Technologies which share a common promise: to preserve privacy whilst also obtaining the benefits of computational analysis. Due to their relative novelty, complexity, and opacity, these technologies provoke a variety of novel questions for design and governance. We interviewed researchers, developers, industry leaders, policymakers, and designers involved in their deployment to explore motivations, expectations, perceived opportunities and barriers to adoption. This provided insight into several pertinent challenges facing the adoption of these technologies, including: how they might make a nebulous concept like privacy computationally tractable; how to make them more usable by developers; and how they could be explained and made accountable to stakeholders and wider society. We conclude with implications for the development, deployment, and responsible governance of these privacy-preserving computation techniques.
CCS CONCEPTS
• Security and privacy → Usability in security and privacy; Social aspects of security and privacy; Privacy protections.

KEYWORDS
privacy-enhancing technologies, expert interview, cryptography, policy
ACM Reference Format:
Nitin Agrawal, Reuben Binns, Max Van Kleek, Kim Laine, and Nigel Shadbolt. 2021. Exploring Design and Governance Challenges in the Development of Privacy-Preserving Computation. In CHI Conference on Human Factors in Computing Systems (CHI '21), May 8–13, 2021, Yokohama, Japan. ACM, New York, NY, USA, 14 pages. https://doi.org/10.1145/3411764.3445677
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
CHI '21, May 8–13, 2021, Yokohama, Japan
© 2021 Copyright held by the owner/author(s). Publication rights licensed to ACM.
ACM ISBN 978-1-4503-8096-6/21/05...$15.00
https://doi.org/10.1145/3411764.3445677
HCI research on privacy has traditionally focused on end-users: understanding their privacy attitudes and mental models, studying their privacy-related behaviours, and designing tools to help them manage data disclosure according to their preferences. While important, this paradigm of end-user privacy also has limitations. First, individuals may have their data processed in remote and opaque ways by dint of being taxpayers, credit risks, or suspected terrorists — not 'end users' as traditionally conceived. In such cases we still need to understand how privacy as a human right and public good can be reflected and governed in such systems. Second, the end-user privacy paradigm neglects the many other entities who play an important role in articulating, navigating, and embedding privacy in a range of contexts. If HCI is about reflecting human values in computer systems more broadly [99], it is equally important to study those people, whether they are developers [10], designers, risk managers, or policy makers. Finally, addressing privacy as a problem of end-user interaction often yields depressing results due to the sheer complexity of personal data processing making it difficult for end-users to comprehend the choices and tools available. This is perhaps especially true when such complexity is the result of modern cryptographic techniques designed to protect privacy [115].

These three limitations are particularly salient in the context of this paper, which addresses technologies for privacy-preserving computation. These are a subset of Privacy Enhancing Technologies (PETs) which have emerged in recent years. These include homomorphic encryption (HE), secure multi-party computation (SMPC), and differential privacy (DP). These foundational technologies share a common promise: to preserve privacy while also obtaining the benefits of computational analysis. HE enables computation on encrypted data, making it possible to outsource computation to another entity without them ever having access to the input data in the clear. SMPC allows multiple parties to jointly perform a computation based on multiple respective inputs without revealing those inputs to each other. DP refers to a way of measuring the extent to which the output of a computation reveals information about an individual, and a range of associated techniques for reducing it. While these technologies have been available in some
form for years, and to some extent already are in deployment, recent progress in their foundational techniques and computational tractability has led some to anticipate their imminent adoption; recent industry analyst reports have suggested that PETs are 'experiencing a renaissance' [25], and that 2020 was 'the year of PETs' [29].

Compared to systems and contexts typically studied in privacy-related HCI research, these privacy-preserving computation techniques may be far removed — conceptually, operationally and experientially — from the entities whose privacy they purport to protect. While HE, SMPC, and DP are sometimes touted in the marketing campaigns of some device makers, for the most part these technologies are deployed as invisible infrastructure rather than being positioned as features which end-users are expected to value, let alone understand or control themselves. In many other (actual or envisioned) deployment contexts, the data being kept private may relate to individual data subjects who are not informed or engaged with its processing; and even if they were aware, they may have no ability to distinguish whether such processing was genuinely 'privacy-enhancing' or not. Furthermore, the mathematical and computational complexity underpinning these techniques raises particular challenges to explaining them to various stakeholders; not only end users and/or data subjects, but also developers, investors, product managers, and policymakers.

These differences make privacy-preserving computation technologies a prime case study for an expanded understanding of privacy within HCI beyond traditional paradigms of user attitudes and behaviours [41], to consider developers [3], managers, policymakers and others [86], and the roles they play in defining and operationalising goals like security and privacy. For better or worse, the development and adoption of these technologies, and the political values and consequences they reflect, may ultimately have relatively little to do with 'end users' as traditionally conceived. With these considerations in mind, this paper aims to explore the following:

(1) What challenges are associated with the adoption of privacy-preserving computation techniques for different stakeholders?
(2) What are the motivations for adopting them?
(3) Why and how should privacy-preserving computation technologies be explained, governed, and made accountable to data subjects and wider society?

To gain insight into these questions, we undertook a series of interviews with a variety of stakeholders involved in various ways in the development and adoption of privacy-preserving computation technologies (PPCTs). These included cryptographers and theoretical computer scientists working on foundational PPC techniques, developers of practical tools and libraries for non-expert developers, senior managers and policymakers assessing and identifying real-world use cases, practitioners building PPC products, and designers working with PPCs as a design material. Our aim was to draw out implications for HCI and design raised by this new class of technologies.

We begin by briefly introducing emerging privacy-preserving computation techniques. We then situate our approach to studying them in relation to prior related work in HCI.
Privacy-preserving computation is a subset of Privacy-Enhancing Technologies (PETs). PETs are a broad category which could include everything from a sticker placed over a webcam [78] to advanced cryptographic techniques [82]. Existing and well-established examples include encryption schemes used to secure data at rest, end-to-end encryption protecting data over the network, and anonymous routing protocols to prevent interactions between identities from being revealed. Such technologies are already widespread, embedded in products and as part of the global internet infrastructure. While they each have different underlying approaches and motivations, these technologies are primarily concerned with the protection of data, at rest and in transit. They generally assume that once data is safely transferred to a secure endpoint, it can be decrypted and computed on in the clear; that a single entity performs the computation; and that whether or not the result of the computation is 'private' has a binary answer.

A more recent wave of PETs — including homomorphic encryption, secure multi-party computation, and differential privacy — allow these assumptions to be relaxed or even abandoned altogether. We briefly introduce them here.
Homomorphic Encryption. Informally, homomorphic encryption (HE) enables computation over encrypted data without ever 'seeing' the input or the output. This is realized through a specific encryption and decryption scheme. In effect, a user could send their encrypted data to a service provider who could then perform the desired computation and send back the output to the user, while remaining oblivious to both the input and the output. More formally, homomorphic encryption is an encryption primitive that enables secure evaluation of an arbitrary circuit f on an encryption C(x) of a plaintext x, without decrypting C(x) in the process, and without requiring any information about the private key. Such an encrypted evaluation results in an encryption C(f(x)) [45, 94], which can at a later point be decrypted by the owner of the private key, to reveal the result f(x), as if f had been evaluated on the plaintext data. In principle, homomorphic encryption can be used to evaluate any circuit on encrypted data, but often a weaker functionality called leveled homomorphic encryption is used instead, which allows only circuits of a predetermined (but arbitrarily high) depth to be evaluated on encrypted data. In practice, the encryption scheme must be parameterised according to a desired depth bound of some interesting class of circuits. Homomorphic encryption, and often leveled homomorphic encryption, has found its application in problems such as secure data retrieval [6, 7, 21, 119], outsourced computation [12, 66] and secure machine learning as a service for sensitive data [46, 53, 97], amongst others.
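To make the 'compute on ciphertexts' idea concrete, the toy sketch below uses a Paillier-style scheme, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the underlying plaintexts. This is an illustration of the principle only; the parameters are far too small to be secure, the scheme is not one of the lattice-based constructions used by the HE libraries discussed in this paper, and all function and variable names are ours.

```python
# Toy additively homomorphic encryption (Paillier-style). Illustration only:
# the primes are tiny and hard-coded, so this is nowhere near secure, and the
# lattice-based schemes used by real HE libraries work quite differently.
import math
import random

def keygen(p=10_007, q=10_009):                       # small fixed primes, demo only
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)                               # valid because g = n + 1 (Python 3.8+)
    return (n,), (lam, mu, n)                          # (public key, private key)

def encrypt(pk, m):
    (n,) = pk
    r = random.randrange(2, n)                         # fresh randomness per encryption
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(n + 1, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(sk, c):
    lam, mu, n = sk
    l = (pow(c, lam, n * n) - 1) // n
    return (l * mu) % n

def add_encrypted(pk, c1, c2):
    (n,) = pk
    return (c1 * c2) % (n * n)                         # ciphertext product encrypts m1 + m2

pk, sk = keygen()
c = add_encrypted(pk, encrypt(pk, 20), encrypt(pk, 22))
assert decrypt(sk, c) == 42                            # the evaluator never saw 20, 22 or 42
```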
Secure Multi-Party Computation. Secure Multi-Party Computation (SMPC) is a class of cryptographic primitives which enables secure evaluation of a function over data shared across multiple parties. It was formally introduced in 1982 as a two-party protocol for the Millionaires' problem [118]. Informally, SMPC primitives allow multiple parties to come together and jointly compute a function on their combined inputs while remaining oblivious to each other's inputs; the Millionaires' problem involves two parties learning which has greater wealth without revealing their respective fortunes. Formally, in an n-party setting, party P_i possesses an input x_i and receives an output y_i upon computation of a function f over the combined inputs x_i (i ∈ {1, ..., n}). The secure computation guarantees the privacy of the individual inputs x_i. Most SMPC protocols can be defined by the choice of the circuit for computing a particular function and the type of secret sharing scheme. Use cases for SMPC include secure operations over distributed sensitive data such as machine learning [5, 44, 59, 84, 93], genomic comparison [39, 61] and private set operations [55, 56].
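One of the simplest building blocks behind many SMPC protocols is additive secret sharing, the same primitive that participants in the study by Qin et al. (discussed in the related work below) were asked to reason about by analogy. The sketch below is a minimal, toy illustration of the idea for a joint sum (for example, the kind of aggregate salary analysis mentioned later in our findings); real frameworks add secure channels, protection against dishonest parties, and protocols for functions beyond addition, and the names and values here are ours.

```python
# Toy additive secret sharing over Z_Q: three parties learn the sum of their
# private inputs without any single party seeing another party's input.
# Illustration only; real SMPC frameworks are considerably more involved.
import random

Q = 2**61 - 1   # public modulus; each share alone is uniformly random and reveals nothing

def share(secret, n_parties):
    """Split `secret` into n_parties additive shares that sum to it modulo Q."""
    shares = [random.randrange(Q) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % Q)
    return shares

# Hypothetical private inputs, e.g. salaries in a joint wage-gap analysis.
inputs = {"alice": 52_000, "bob": 48_000, "carol": 61_000}

# Each party splits its own input and sends share i to party i.
all_shares = {name: share(x, len(inputs)) for name, x in inputs.items()}

# Party i locally sums the shares it received (one from each input owner) ...
partial_sums = [sum(all_shares[name][i] for name in inputs) % Q
                for i in range(len(inputs))]

# ... and only these partial sums are exchanged; their total reveals the sum alone.
total = sum(partial_sums) % Q
assert total == sum(inputs.values())
```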
Differential Privacy. Differential Privacy (DP) [33] is a framework for sharing information based on a dataset while statistically limiting information exposure about the individuals in the dataset. More broadly, the idea of differential privacy is to deploy a mechanism where the effect of a single substitution in a dataset is very small. In effect, a query on a dataset with such a mechanism in place does not reveal anything substantial about a single individual. Differential privacy may not always be considered a privacy-enhancing technology per se, but rather a theory for measuring privacy in a particular way. However, there are several techniques which are closely associated with differential privacy, all of which involve adding noise to results according to differentially private constraints; we therefore refer to this family of techniques loosely as differential privacy technologies. Formally, a randomized function f gives (ε, δ)-differential privacy, for all databases D and D′ that differ by at most one record, non-negative values ε and δ, and all S ⊆ range(f), iff

Pr[f(D) ∈ S] ≤ e^ε · Pr[f(D′) ∈ S] + δ

Here ε and δ are the privacy parameters. Differential privacy is one of the more widely deployed privacy-preserving computation technologies. It can be applied to querying databases [62], building differentially private machine learning models [2, 114] and performing statistical analysis [34, 35] with privacy guarantees. More recently, the US census used DP in 2020, Apple has deployed local DP for a number of features, and Google has been using DP for collecting data over its Chrome browser [38] in a privacy-preserving manner.
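As a concrete sketch of one such mechanism, the snippet below implements randomized response, a local form of differential privacy related to the telemetry-style deployments mentioned above and to the study by Bullek et al. discussed in the related work below: each individual's reported answer is randomly flipped often enough to be deniable, yet the true population proportion can still be estimated. The parameter values and names are ours, chosen only for illustration.

```python
# Toy randomized response, a local differential privacy mechanism: each person
# reports the truth with probability e^eps / (e^eps + 1) and lies otherwise,
# which satisfies eps-local DP for a single yes/no answer. Illustration only.
import math
import random

def randomized_response(truth: bool, epsilon: float) -> bool:
    """Report the true answer with probability e^eps / (e^eps + 1), else flip it."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return truth if random.random() < p_truth else not truth

def estimate_yes_rate(reports, epsilon):
    """Unbiased estimate of the true proportion of 'yes' answers from noisy reports."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)

true_answers = [random.random() < 0.30 for _ in range(100_000)]        # ~30% true 'yes'
reports = [randomized_response(t, epsilon=1.0) for t in true_answers]  # what is collected
print(round(estimate_yes_rate(reports, epsilon=1.0), 3))               # close to 0.30
```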
Despite their differences, these technologies have all been classed as 'tools for privacy-preserving computation' [108], which enable 'the derivation of useful results from data without giving other people access' to such data [102]. These privacy-preserving computation technologies are still emerging. While significant theoretical progress has been made, this has yet to be translated into widespread adoption. However, numerous libraries exist for HE, SMPC and DP (for HE: Microsoft SEAL [1], HElib (https://github.com/homenc/HElib), and PALISADE (https://palisade-crypto.org); for SMPC: CrypTen (https://crypten.ai), emp-toolkit [112], and SPDZ (https://github.com/bristolcrypto/SPDZ-2); for DP: Google's DP framework (https://github.com/google/differential-privacy), Diffprivlib (https://github.com/IBM/differential-privacy-library), and PySyft [96]), the number of government-funded PPC projects is increasing [108], and various industry and policymaking forums have publicly heralded their potential. Recent reports and working papers have catalogued actual or potential use-cases, as well as noting possible usability barriers including programming complexity, computational overhead, and parameter selection [20, 108].

Much research on privacy in HCI is concerned with how end-users value, negotiate, and manage privacy in the context of their interactions with computers. Work in this vein involves: understanding the attitudes [67, 69, 101, 109, 110], expectations [9, 75] and mental models [64] of end-users regarding how their data is collected and used; studying privacy-related behaviours such as willingness to share data [4, 74] and use of protective measures [14]; and evaluating and designing tools for privacy management such as permission settings [73, 76], privacy notices [11, 98], and privacy assistants [77]. Related research in usable privacy and security addresses the usability of various end-user PETs tools. These include privacy and security aspects of ubiquitous tools, e.g. web browsers [26, 57], as well as more advanced specialist tools, such as end-to-end encryption [115] and anonymous communication and routing tools [22]. Such work is highly relevant to contexts in which end users directly interact with systems in ways that may affect their privacy, and where there are opportunities to (re)design tools and interfaces to give them more control. Such work is premised on the ideal of individual users being able to understand at least some aspects of how their data is processed, and having the potential to exert some meaningful choices over it.

In some cases, privacy-preserving computation technologies might be usefully studied from this end-user perspective. Bullek et al. [19] studied people's comprehension of the randomized response method for local differential privacy [113]. Participants were asked a series of questions, the answers to which were perturbed with noise to provide privacy. In response to a final question about a particularly sensitive topic, they were able to choose how much perturbation to add (i.e. the value of ε). While most participants selected the lowest (most privacy-preserving) value for ε, surprisingly, 20% chose the highest (least privacy-preserving) value for ε. Some participants explained this was because adding more noise felt like lying. Xiong et al. [117] also studied participants' willingness to share data with a hypothetical differentially private system. They examined the effect of different descriptions of differential privacy (including real descriptions provided by technology companies and the U.S. Census Bureau) on willingness to share, and their findings suggest that certain descriptions (in particular, implication descriptions) are more understandable and increase willingness to share data as a result. Finally, Qin et al. explored usability and understanding in the context of privacy-preserving data aggregation initiatives based on MPC, finding that using various analogies to explain the process of additive secret sharing increased participants' confidence in the scheme [92].

However, in many contexts, the 'data subjects' are not co-extensive with the 'users'. In the case of the PPCTs mentioned above, there may be several primary users (which may include developers and others) and many wider 'stakeholders' (e.g. commercial and government partners, the wider public). Rather than studying end users who are also data subjects, then, we might instead follow previous HCI research on privacy and security which focuses instead on other actors, such as developers (e.g. [3, 8, 10, 48]). Balebako et al. note that while users may be concerned about privacy, they are generally not 'empowered to protect themselves'; by contrast 'the decisions made by app developers have great impact' [10]. Studying developers, designers, and others can reveal both practical and organisational challenges hindering the deployment of privacy and security technologies [3, 42], highlight discrepancies between privacy research and privacy engineering [51, 68], as well as elucidate the moral dimensions of design. While some studies of software developers suggest that they may 'not have sufficient knowledge and understanding of the concept of informational privacy' [52], others do explicitly engage with the ethical and political ramifications of their work; e.g. Rogaway [95], who acknowledges how the field of cryptographic privacy technologies has 'an intrinsically moral dimension'.

This kind of reflexivity on the part of developers and designers is something acknowledged and addressed in approaches like Value Sensitive Design (VSD), which aim to 'illuminate the ethical and moral responsibility on the part of the designer rather than the user' [41]. To understand how particular technologies are imagined as solutions to problems [60], we may need to study a wide variety of actors involved in their development, not only engineers but also those involved in the business of marketing them [86]. By encompassing the full breadth of different actors involved in creating and deploying these systems, we are also able to grapple in different ways with the trade-offs and tensions inherent in the field of privacy-preserving computation, and ask questions like '"Who is making the design decision?", "Who is paying for it?", "What is this saying about the user?"' [54].

Finally, there is also work which critically addresses Privacy-Enhancing Technologies from a philosophical and conceptual perspective. For instance, Tavani and Moor [107] assess how earlier PETs such as PGP and anonymity tools may address privacy as individual control, but do not provide 'external' control beyond the user, which they argue is necessary to protect privacy in the round. Gurses and Berendt point to the limitations of PETs that stem from understanding privacy solely in terms of confidentiality [50]. Stalder points to the ways that PETs designed for individual use may occlude broader social meanings of privacy [104], while Phillips notes how PETs designed to assist businesses with automating compliance with privacy laws reinforce a restricted notion of privacy as unwanted intrusion [89].
Given that the technologies being addressed here are still emerging, and the broad and exploratory nature of our research questions, we chose to undertake in-depth semi-structured interviews with a select range of experts from a range of backgrounds and roles [15]. All had direct experience of working on projects relating to privacy-preserving computation, and occupied different strategic positions in the developing ecosystem. They included: researchers working across HE, SMPC and DP research; industry practitioners and designers with experience delivering practical applications of these technologies; as well as policy experts with experience in PETs. We deliberately selected some experts whose careers and roles bridged between the domains of research, industry, or policy, some having moved from one to the other over the course of a career, while others maintained feet in multiple domains simultaneously. These participants can be seen as 'boundary workers', working between the boundaries of science and policy to facilitate the co-production of knowledge and innovation [58], and 'knowledge brokers' who facilitate connections between scientific and other audiences [83]. Including a variety of different roles also reflects the nature of these technologies as 'use-inspired basic research' [87] operating between 'basic' and 'applied' research paradigms [106]. This enabled us to not only understand how the knowledge surrounding these technologies is made in specific places (e.g. research labs, technology companies, government) but also 'how transactions occur between places' [100].

Because these technologies are still emerging, we inevitably could only draw from a small class of professionals, whose roles in the production of these technologies are to some extent ill-defined. As is typical with expert interviews, there was no comprehensive list of relevant experts to sample from; we therefore built a 'sample frame' based on publicly available materials from a wide variety of sources including research papers, industry and policymaking fora, and press [47], to identify potentially relevant experts, and also used snowball sampling. As a result of the variety of roles and experiences of our 9 experts (see Table 1), we used a semi-structured interview format slightly tailored to four different roles (research, industry, policy, design). We invited participants to discuss their experiences, motivations, and perceived opportunities and barriers relating to this space. Open-ended questions allowed us relatively free rein to explore these issues [88]. The interviews were conducted over video chat during Spring and Summer 2020. The 9 interviews varied in length between 35 and 75 minutes, with the average taking 55 minutes, producing 8.3 hours of audio recordings in total, which were transcribed. All parts of the study were approved by our institution's ethics review committee.

We used thematic analysis [17] to identify key themes and ideas discussed by experts. Two of the authors independently developed a set of codes based on close reading of disjoint subsets of the interview transcripts, using an open coding process. The two authors then discussed and consolidated their codes to derive a common set, which was then applied by both authors to all the interview transcripts, and memo notes were taken to record observations about the codes and their relation to one another [72]. A final round of discussion based on this data resulted in a set of themes and sub-themes, presented in the following section.
P1 [R]: Cryptography Researcher and an industrial PETs library developer (20-30y experience)
P2 [R]: Cryptography Researcher at a company specialising in privacy-preserving computation (5-10y experience)
P3 [R]: Cryptography Researcher and an industrial PETs library developer (5-10y experience)
P4 [P]: Law and Policy professional working on industrial adoption of privacy-preserving ML (4-6y experience)
P5 [P]: Senior government adviser on technology, with a strong interest in PETs and their applications (5-10y experience)
P6 [I]: Security and Privacy Researcher working in an executive role at a large tech company (2-5y experience)
P7 [I]: Data Scientist working on privacy at a 'big four' accounting firm (5-10y experience)
P8 [I]: Researcher at a consultancy specialising in privacy-preserving computation as a service (2-5y experience)
P9 [D]: Designer at a tech and design agency specialising in ethical use of data & AI (2-5y experience)

Table 1: A summary table of the total participant sample from Research (R), Policy (P), Industry (I) and Design (D).
We divide the findings from our interviews into two main areas. The first addresses the technical challenges and opportunities around the adoption of privacy-preserving computation techniques, primarily concerning their transition from theoretical research into practical application. The second concerns the motivations and goals for deploying these techniques to address commercial and societal goals, and how the institutions that deploy them might explain and be held accountable for their use.
Many participants brought up how advances in the theoretical grounding on which privacy-preserving computation techniques are based may not translate straightforwardly into specific real-world applications. This was acknowledged by both research scientists working on those foundations, and practitioners attempting to deploy the technologies in particular contexts. Many participants expressed confidence that those theoretical advances would translate into practice in time. P3, a researcher, argued that while it is 'early stages, from the business development perspective of homomorphic encryption', he was nevertheless 'confident that [the] technology is useful, practical' (P3[R]).

It was generally accepted that much of the research had only reached the stage of proofs-of-concept rather than deployments; while 'doable in principle', 'there is a lot of work to do before that bigger picture potential can be realized' (P1[R]). Similarly, P9[D] explained that as a designer she was 'trying to make them more widely understood within the design and tech community ... There's lots of research in academia at the moment, but not very many examples of them being used in practice'.

The policy experts we interviewed were optimistic that the technology was already nearly ready for practical deployment. For P4[P], 'the technology has scaled to the point that ... it's definitely commercially deployable'. For P5[P], while practical deployment would require a series of 'reasonable engineering and architectural compromises', he was still optimistic that 'existing approaches to homomorphic encryption are tractable'.

While both research scientists and policy experts were optimistic about the big picture, those trying to bridge theory and practice on the ground expressed frustration that much of the research was not directly relevant. In some cases, this was because the scientific work made simplifying assumptions that were rarely satisfied in real use cases. In the context of trying to apply differential privacy techniques to a project involving time series data, P7[I] admitted that he 'really struggled, you know, seeing the value in all those techniques that academia likes to talk about ... how am I going to use that with time series?'.

Similarly, several participants pointed to the variety of messy underlying data and software issues that exist on the ground that hamper deployment. P7[I] explained that the initial challenges are around 'how do we list data assets, and manage access, at an enterprise scale?'; while P8[I] spoke of clients with various custom systems and data formats, so 'while we're solving the security aspect of the communication between counterparties ... we still haven't fixed this engineering problem - it's not going away'.

P2[R], a research scientist who had worked on both foundational theory and engineering, explained how applying techniques in practice involved moving carefully between theory and engineering:

'Engineers do need to learn a lot to deploy these kind of technologies and what makes the whole thing complicated is that they need to acquire a kind of knowledge that is not something that the university professor knows ... a lot of low level optimizations make a huge difference, and yeah, from theory to practice they need to somehow be invented.'

In P2's opinion, such work would not come from a 'linear transmission of knowledge', but rather through continuous iteration and 'course correction' between theoreticians and engineers.
As well as theoreticians and engineers to bridge the gap between theory and practice, several participants discussed the need for people with different skills, backgrounds and motivations to work together. This included not only the combinations of different expertise involved in foundational privacy-preserving computation research, such as math, statistics, and cryptography, but also specialists in specific application domains. As P2[R] noted, successful deployment depends on a 'component of multidisciplinarity':

'In my experience this is very hard to get the right people and crucially, with the right incentives in the
same room ... You not only need data scientists, but also security engineers, mathematicians and then experts in the application domain.'

The differing motivations and cultures of these different communities was seen by some as a problem, as it leads to certain important problems being neglected. Commenting on the misalignment between the incentives of academic researchers and industry, P3[R], who had worked in both sectors, lamented how research is 'driven by the need to get published; it favors more ... performance breakthroughs, functional breakthroughs'; meanwhile, topics like usability are 'not so interesting for the basic core research community'.

The need for an even broader range of disciplinary expertise and professional skills was articulated by P4[P], who described her role as 'to bridge the lexical gap between technologists, lawyers, and policymakers to defragment the current initiatives in PETs'. Drawing from previous experience working on AI in government, where 'insulated development' led by technologists failed to account for the 'constitutional implications' of these technologies, she warned that 'the same could happen for PETs without this sort of ... interdisciplinary discourse'.

A common theme among both researchers and industry practitioners was the complexity of applying privacy-preserving PETs from a software engineering perspective. They discussed a set of inherent challenges facing developers around flexibility, performance, and specifying appropriate parameters.

Several participants described how PPCTs, in particular homomorphic encryption, can be very 'brittle' (P1[R]): small changes in parameters can result in drastic reductions in performance, security, or privacy guarantees. Such sensitivity can be hard for developers to anticipate and manage, especially as there are many different parameters to tune. This was contrasted against other PETs, like public key cryptoschemes, where there is one main parameter — key size — which has a fairly predictable relationship with security guarantees and computational overheads:

'I mean RSA, you have the bit length and that's pretty much it. These things [homomorphic encryption applications] you have a ton of decisions to make when it comes to how to instantiate it, and they have implications for both speed and for the actual function that you will need to compute.' (P1[R])

This results in problems for developers not just in the initial implementation of a privacy-preserving technique, but also as they inevitably need to update a system to 'evolve when you need to change various details ... performance and tractability can be so highly dependent on small details' (P1[R]).

Participants articulated this as a trade-off between approaches to building PPCTs that either work out-of-the-box but have poor and unpredictable performance, or that have reasonable performance but require fine-tuning by engineers. For instance, while the invention of fully homomorphic encryption enables both addition and multiplication and therefore arbitrary computation, specific applications still need to be converted into those arithmetic operations and may incur great computational costs depending on how that is implemented.
While it may be possible to 'come up with a system with adequate performance for your application', this often requires having an application which is 'fully specified and well defined in mind, and you have a team of experts working for you' (P1[R]).

Several participants spoke about the development of privacy-preserving computation software libraries for developers (see 2.1.4), often contrasting two approaches which reflect the trade-off articulated above: on the one hand, libraries which create an abstraction layer that obscures the underlying complexity; and on the other, libraries which expose all of that complexity so that developers still need to create bespoke solutions for their application context. P1[R] noted that many developers expect a library to provide 'abstractions that are convenient'; otherwise 'It's like telling people: OK, I'll give you transistors and you'll build from them ... people don't think this way and for good reason'. P2[R] made a comparison to machine learning frameworks (e.g. TensorFlow and SciPy):

'They have a very nice abstraction layer that allows them to say "OK, here's my function over the reals: optimize it, under the hood" ... We will have worried about implementing matrix multiplication super quickly over floating point numbers so that the data scientists can assume [it's] like doing math on their notebook, right? This level of abstraction doesn't exist yet [for privacy-preserving computation]'

However, P2[R] cautioned against such an approach for privacy-preserving computation libraries, because it would preclude 'a lot of optimizations that come from understanding the underlying protocol'. P1 echoed this, stating that:

'The only way that we know now of making the computation go reasonably fast is to use a lot of tricks and the developer needs to know about those tricks'

As a result, P2[R] felt that 'general purpose tools' would inevitably fail to meet developers' performance expectations and thus give them a mistaken impression about the true potential of PETs.

While many participants were in favor of some form of standardisation via libraries, P8[I] explained that the prospects for a standard platform depend on where the technologies are being deployed. In the context of SMPC, because smartphone operating system providers 'control the platform, they can decide ... this is how it's going to work'; whereas P8[I]'s work involved deploying SMPC into a wide range of different clients' environments where 'we can't really dictate to them how they store their data'; as a result the possibility for standardisation was small.

Our participants also raised various insights relating to the motivations for adopting PPCTs, and challenges relating to explanation and accountability.
Unsurprisingly, 'privacy' was often cited as the motivation for developing and deploying privacy-preserving computation technologies. However, some subtly different articulations and understandings of privacy emerged from our interviews, as well as some other motivations which went beyond privacy altogether.

Some very directly motivated the adoption of privacy-preserving computation by reference to the interests of individuals in privacy and the protection of their personal data: 'it's individual privacy - it's human rights' (P6[I]). In comparison to the push for similar technologies in other markets, privacy-preserving computation was more a response to individual privacy:

'People do understand when their privacy is violated. So ... the push for these technologies is very different to the push the semiconductor industries have had ... So I give a lot of time on the examples that are user-centric' (P6[I]).

However, appeals to individual privacy were often mediated via other pressures. First, organisations deploying PETs may not have a direct relationship with those individuals, but are instead concerned with third parties affected through business-to-business relations:

'You have customers: these may be business to business customers, but that also extends to customers of customers and therefore it boils down to individuals' (P6[I])

Second, some cited the existence of privacy and data protection regulation as an incentive to provide and deploy PETs: 'because of GDPR [the E.U. General Data Protection Regulation] ... all the regulatory environment is ... very favorable for providers' (P7[I]). This regulatory pressure meant that investment in PPC could be accounted for in terms of corporate risk management: 'to have compliance at least formally speaking with GDPR ... it's really protecting assets of a company' (P6[I]).

Third, P9[D] argued that rather than just enabling existing data processing to be done in a more privacy-preserving way, these technologies could enable new insights which 'you might not have been able to gain before because of the sensitivities around the data that you are using', a sentiment echoed in [20]. P4[P] highlighted a range of 'missed opportunities' for privacy-preserving computation 'for a good purpose'. These included cases such as the Boston Women's Workforce Council (https://thebwwc.org/mpc), who 'used secure multiparty computation to confidentially analyze gender wage gaps without ... disclosing who the salary belonged to'. P5[P] noted the opportunities for government national security services to use HE techniques like private set intersection to identify suspects without combining certain databases in the clear (a use case discussed in [27]), something that might not otherwise be undertaken due to the 'intrusiveness' of sharing data of large numbers of innocent citizens between departments.

While individual privacy was cited by all participants as an important motivator, it was often an indirect motivator, and in some cases perhaps insufficient on its own (e.g. without being coupled with new opportunities to extract value from data). Other participants articulated motivations for pursuing privacy-preserving computation which had nothing to do with individual privacy as such.
For example, for some researchers (e.g. P1, P2), it was basic intellectual curiosity ('somebody thinks of something that ... looks interesting to them' (P1)). Other cases included where competing businesses would have a mutual interest in the output of some computation on their respective data, but would not otherwise share that data out of 'fear of losing a competitive edge' (P6[I]). Intellectual property protection was also frequently cited as a key motivation for many business applications.

Privacy-preserving computation techniques were also seen by some as offering the possibility to navigate regulatory obligations and trade-offs in different ways. First, they have the potential to fulfill obligations to protect data in new, more 'technological' ways, offering 'technological safeguards that can't be easily overridden', the kind of protection that 'paper safeguards, like contractual guarantees and policies, just can't provide' (P4[P]). They were seen as especially promising in cases where different regulatory obligations might appear to be in conflict, as P4[P] explained:

'Anti money laundering regulations are very data maximalist; they want you to collect more data [to prevent] financial crimes. But in the meantime the GDPR is quite the opposite; it wants you to minimize data, ... and this really conflicts with the regime of AML. I think that PETs could actually cut through these legal conflicts and really provide a practical solution ... it's not actually transferring PII, but it still allows for banks to prepare for AML protocols'

Similarly, for P5[P], privacy-preserving methods had the capacity to change what is possible without sharing data and thereby shift the scales in legal balancing tests [18] that might otherwise make certain data analysis unlawful:

'UK law ... sets out a test for those of us in national security which is necessity and proportionality. So if you can shift the proportionality, then you're in a better position so you can avoid intruding, you can avoid privacy risk'.

In these ways, such techniques were envisioned by P4[P] and P5[P] as enabling organisations in the public and private sector to break free of what P4[P] called 'legal gridlocks' that currently exist (or are perceived to exist) around data use, and to enable new kinds of analysis.

Our participants discussed various facets relating to explaining privacy-preserving computation, including how they go about explaining it to different audiences (and in some cases, why they don't even try).

The researchers described a variety of contexts in which they had had to explain underlying techniques and their strategies for doing so. For a general audience, P2's strategy was to explain simplified versions of protocols, such as simulating a secure multi-party computation for dating using playing cards (see [80]). While these were 'fun to explain', P2[R] was unsure about the effectiveness of such explanations:

'Then in the future, [the audience] will be like: "Oh yeah, multi-party computation, the thing with the cards." That doesn't mean that my explanation was effective... My feeling is that people tend to end up amused and satisfied.' (P2[R])

Such explanations were offered as a starting point to encourage people 'who are attracted by that kind of magic' and would 'go into Wikipedia immediately after' (P2[R]).
However, P3[R] felt that there was a lack of accessible educational material: 'there is certainly not enough material and the classical crypto papers are essentially useless for someone who is not an encryption expert'; they suggested that explanations of core concepts might be more effective if tackled as part of a standardisation process and included within libraries.

Several participants also cautioned that the kind of explanations offered (if any) need to be tailored to the audience. On the one hand, explanations could be too technical: 'If you start with equations ... you lose 99 percent of the audience right away' (P7[I]). On the other hand, short intuitive explanations might be too simplistic for informing executive decisions:

'So one thing is getting people interested, and the other one is informing, like, executive decisions. I don't think they should be informed ... by two minute stories... I don't think decisions about encryption are made based on an intuitive understanding of crypto'

For P9[D], designers have a role to play in explaining privacy-preserving techniques through prototyping their use in specific contexts. This included explanations to end users, but also 'a different language to explain it to those designers as well'. Previously, their design agency hadn't 'seen much demand for them on the industry side'; however, that changed after publishing a blog post explaining visually how differential privacy could work in the context of a project on identifying inequalities in urban mobility:

'Each step of the randomized response process ... we had an image to go with it, so that you could see ... the noise that you are adding to data. Visually seeing it was really helpful for me as a designer and then tying it to sort of real life stories so that I could see how you wouldn't be able to re-identify someone. Imagining what that makes possible forces you to think about the qualities of that technique, what it now enables you to do'

When it came to explaining these systems to end-users, however, some participants questioned whether this was a worthwhile goal. P9[D] couldn't imagine 'many scenarios where it's necessary to explain what privacy preserving techniques are being used to an end user who is trying to do something with their phone'. P7[I] asked himself whether end-users understood these techniques, and answered: 'Well in general, not. Is it a problem? I'm not sure it's a problem'. In such cases, it was seen as sufficient that end-users 'trust the provider of the solution that they do a good job' (P7[I]).
A final theme was around the challenges of governance and accountability of privacy-preserving computation. These topics often followed organically from discussions of explanation; attempts to explain these systems were often made in the course of trying to justify their use to affected stakeholders, and justification is a key element of accountability [16]. But even if explanations don't lead to real understanding on an individual level, it might still be possible to justify them to the public. P5[P] put it this way:

'These technologies are extremely difficult to understand... Do they meaningfully address genuine privacy issues? Yes they do. Do they address public concern? That's not to do with the technologies per se, [but] how the technologies are explicated and made available. If you told the public: "As a result of using these technologies, we are able to limit the amount of your personal information that's shared, and are still able to offer you valuable services", they would be enthusiasts.'

Other participants expressed scepticism that the public would take such guarantees at face value. In the context of proposals for privacy-preserving facial recognition in border control, P6 asked:

'if someone publicises this new system ... just by saying: "and by the way the privacy of the information is very well handled because we use the state of the art cryptography", what does that mean to a citizen?'

Both P6 and P7 suggested that certifications and trust marks applied to services which use these techniques could enable individuals to seek out more trustworthy systems. However, expecting individuals to exercise meaningfully informed choices in relation to different services involving privacy-preserving computation was seen by some as adding to the burden of responsibility unhelpfully placed on individuals. P9 reflected on how 'constantly making decisions about data in the technology that we use is just not sustainable'; instead, they suggested that 'collective consent models and other governance mechanisms ... that can make decisions on behalf of people' might be a better approach. Similarly, P2[R] felt decisions about the technical details of the adoption of these technologies ought to be made by 'using experts or authorities' who can act as 'proxies ... [who] understand their communities'.

While most of our participants pointed to the positive potential of privacy-preserving computation techniques, a few were also concerned about the power imbalances they might reinforce. When the stakeholders are individuals, they are 'by definition, the weaker party', and 'lack the resources ... to induce changes; every time we talk about privacy there is some asymmetry that is implied by it.' (P1[R]). For P9[D], it is important to recognise the limitations of PPCs as they are just:

'a technical solution to protecting people's privacy ... you have to think about the wider system that they sit within and what other kind of power dynamics are in that system.' (P9[D])
The findings from our interviews raised several important implications for the design and governance of privacy-preserving computation. They reveal how these techniques are being not only technically but also socio-technically constructed and constituted by a variety of actors, each pursuing overlapping and sometimes diverging agendas. Clearly, privacy-preserving computation techniques entail a variety of human-centric challenges which HCI research could seek to address. These challenges are multifaceted and will require diverse approaches; something that HCI as a methodologically diverse field is well-positioned to reflect. Furthermore, these challenges are inter-related: for instance, the way in which these technologies are translated from theory to practice may well affect how they can be explained and held accountable; while closer inspection of how 'privacy' and other motivations are unpacked might reconfigure what kinds of interdisciplinary collaborations are required in a particular context. Our aim in this section is to reflect on these, to understand both the design problems facing these techniques, and the challenges they raise in relation to the interests of a variety of users and wider society. This discussion is not intended as direct 'implications for design'; rather, we hope to draw attention to issues which require further research, as well as interdisciplinary discussion.
While our experts generally acknowledged the individuals whose personal data is being privately computed on as an important stakeholder group, few seemed to prioritise seeking their understanding and acceptance. This is in contrast to the small number of existing HCI studies that investigate 'user acceptability' of particular privacy-preserving computation techniques such as differential privacy [19, 117] and MPC [92]. User acceptability could and should be further examined in particular contexts; for instance, Colnago et al. suggest further work is needed to explore whether such techniques embedded in Internet-of-Things privacy assistants might 'help mitigate people's reservations about data collection practices and reduce the chance they opt out' [24]. There is clearly great scope for important research within this paradigm of user acceptability.

However, our experts spoke about privacy-preserving computation technologies more as tools enabling organisations to achieve a variety of goals (including managing privacy risks, but also protecting corporate assets and secrets), rather than as a means of directly serving users' interests. While user acceptance was not entirely disregarded, it did not appear to be a primary concern; even P9, a designer well-versed in user-centred design, doubted that people could or should be expected to understand and make decisions about privacy-preserving computation. Privacy may be important, and these techniques may have the potential to meaningfully embed it, but whether or not individuals understand and accept them seemed to be almost a secondary issue. In many of the use cases they mentioned, individuals whose data is being computed on may not have any direct interaction with the system, nor any choice about whether to use it. In expressing such doubts, our interviewees might appear to be denying a sacrosanct tenet of HCI as a human-centred discipline. However, rather than denying the importance of user acceptance, we believe that these doubts should in fact point us towards alternative human-centric approaches to the development of privacy-preserving computation, in addition to solely looking at end users as data subjects.

First, our findings point towards studying the needs of different kinds of end users; specifically, those developers and designers who attempt to apply foundational privacy-preserving computation techniques in real-world applications. This echoes recent calls to acknowledge that 'developers are users too', as Green and Smith argue in relation to crypto and security libraries [49]. Similarly, P9[D] pointed to the relative lack of awareness and understanding of these techniques among designers. As with the application of other complex methods in computer science, such as machine learning, it may be difficult for designers to use privacy-preserving computation techniques in design practice due to unfamiliarity with how they work and awareness of what they can achieve [32]. P9[D] made the case for technical specialists and designers to work together to translate these technologies into 'design material' which design practitioners can use to explore real use cases.

In addition to understanding developers, designers, and others as users of privacy-preserving computation techniques, studying them also allows us to explore how a human value like privacy shapes the construction of complex computational systems.
This perspective accords with 'third wave' approaches to HCI which orient attention towards the ethical obligations and values of designers [41], and incorporate different disciplinary perspectives which examine how social and political dimensions are embedded and reflected in systems [13]. As such, rather than just considering whether end users or laypeople understand, trust, and accept privacy-preserving computation technologies, we might also benefit from considering the perspectives of the various people involved in constructing their technical, commercial and regulatory foundations. Assessing whether an innovative technology will be acceptable to users through lab and field studies may be valuable, but such approaches often neglect the ways in which such technologies are interpreted, shaped, and mutually constructed over time through their designers, users, and broader political, economic and regulatory forces [60, 79]. As a result, it is equally important to consider the plurality of different actors and broader contexts through which values like privacy will be understood, traded-off, and embedded in these systems (or not).
If, as suggested above, we are to consider the needs of developers and designers as users of underlying privacy-preserving computation techniques, then how might those needs be met? Many of the interviewees identified the need to create building blocks for privacy-preserving computation. In an ideal world, these building blocks would allow developers to abstract away the technical details and apply them to applications in different contexts. Creating such abstractions is fundamental to progress in computer science and programming; in Edsger Dijkstra's words, it is 'our only mental aid ... to organize and master complexity' [31]. However, many of the experts expressed uncertainties about the form such abstractions should take and the extent to which they could reasonably be made in the domains of privacy-preserving computation. Especially with homomorphic encryption, abstracting away the details of implementation could mean losing the ability to optimise performance through engineering 'tricks' (P1[R]).
Attempts to create tools for developers to enable them to integrate privacy-preserving computation techniques into their products may therefore need to grapple with this need to balance abstraction and engagement with the implementation details. Specific applications will always require some 'intimacy with the details' [105] that might otherwise be abstracted away. Some of our interviewees argued that the necessary education required for developers could potentially be integrated into standardised APIs. This suggests that broader adoption of privacy-preserving computation may benefit from work in HCI which considers APIs and libraries as 'first class design objects' [85, 120], with the goal of 'driving adoption of software components' [81]. This could involve (re)designing them around the typical ways programmers learn, e.g. on-the-fly, via information foraging, and trial and error [65, 71].

However, the nature and extent to which developers need to become intimate with the details, and how they might do so, will clearly depend on the particular technique in question. For instance, a DP library might implement a variety of noise sampling and injection techniques, but this is relatively simple compared to the much more complex mathematics and reasoning involved in deciding on and managing an appropriate privacy budget, which requires case-by-case human consideration (a toy illustration of this kind of budget bookkeeping is sketched at the end of this subsection). For SMPC, libraries might take care of some of the networking details, but leave difficult decisions regarding the protocol up to the developer. The nature and value of these standardised building blocks will therefore vary greatly between approaches.

Ultimately, the design and adoption of these privacy-preserving computation building blocks may need to reckon with the messy realities of underlying enterprise IT infrastructure, agile and iterative approaches to software development [51], and service-oriented architectures [68]. Given these practical considerations, the full complexity of these technologies might instead need to be mediated via a two-step process: general-purpose libraries which expose all of the complexity of a domain (e.g. homomorphic encryption) that enable specialist privacy engineers to create particular privacy-preserving computation components for common operations or use cases (e.g. private set intersection for contact discovery); those components could then be adapted and deployed with minimal configuration by non-specialist developers as microservices.
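To illustrate the split between what a library can automate and what remains a human judgement, the hypothetical sketch below applies basic sequential composition (the ε values of successive queries on the same data simply add up) to track a privacy budget. It is not the interface of any particular DP library; real accountants use tighter composition results, and choosing the total budget, and what it means for the people in the data, is exactly the case-by-case consideration noted above.

```python
# Toy privacy-budget accountant using basic sequential composition: the epsilons
# of successive DP queries over the same data add up. Hypothetical sketch, not
# the interface of any real DP library (which offer tighter accounting methods).
class PrivacyBudget:
    def __init__(self, total_epsilon):
        self.total = total_epsilon    # the overall budget is a human, policy-level choice
        self.spent = 0.0

    def charge(self, epsilon):
        """Record a query's epsilon, refusing the query if the budget would be exceeded."""
        if self.spent + epsilon > self.total:
            raise RuntimeError("privacy budget exhausted: query refused")
        self.spent += epsilon

budget = PrivacyBudget(total_epsilon=1.0)
budget.charge(0.5)    # first query fits
budget.charge(0.4)    # second query fits
# budget.charge(0.2)  # would exceed the total of 1.0 and raise
```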
The way privacy-enhancing technologies are sometimes described can make them seem esoteric, exotic, and mysterious. For instance, in industry press they have been described as ‘black magic’ and a ‘holy grail’ (see, e.g., https://dualitytech.com/tag/homomorphic-encryption/). Such language suggests that their development is entirely in the hands of a small and specialised cabal of cryptographers and engineers, much like the early programmers who, in the words of Rear Admiral Grace Hopper, regarded themselves as ‘high priests’ of assembly code [116]. It is possible to imagine how, in these respects, they might end up sharing the same ‘rampant hyperbole and political envisioning’ [37] of a higher-profile cryptographic technology: blockchain.
While our interviewees avoided such language, and even criticised the perceived ‘hype’ around PETs, they did reflect the highly specialised knowledge required to make use of the underlying mathematics, and drew parallels with magic. P6[I] described feeling like ‘Gandalf the wizard’ upon telling people that computation on encrypted data was possible, while P1[R] described the need for engineering ‘tricks’ to optimise performance within reasonable levels. From this perspective, the technical work of applying privacy-preserving computation seems more like craft than science, which the guild of cryptographers and engineers are uniquely capable of performing [91].

However, the mystery of their inner workings could easily serve as an excuse for not making these systems accountable to affected stakeholders. When reflecting on the challenges laypeople face in trying to understand PPCTs on any meaningful level, both P5[P] and P7[I] expressed some doubts about the possibility that individuals could ever be expected to really understand how they work. Yet without some form of explanation, and absent any other mechanism for meaningfully communicating their risks and opportunities, there is a risk that privacy-preserving computation becomes not just a technical but a technocratic solution, imposed on populations without popular consent by grey eminences operating behind the scenes.

Nonetheless, several of the experts did acknowledge the need for mechanisms of accountability and governance to be developed as these technologies are rolled out. P6[I] and P7[I] suggested this could involve certification schemes. Similarly, while P1[R] and P9[D] were doubtful about individuals being able to meaningfully consent to these technologies, they proposed alternative forms of collective governance, where the interests of affected individuals could be represented by relevant representatives and experts who can make informed choices and demands on their behalf. These and other democratic mechanisms will need to be explored in order to counter a privacy-enhanced technocracy, and methods from HCI, such as participatory design [36], futures workshops [63], and other governance approaches, may have much to offer.
Our findings attest to the many varied interpretations and uses of the term privacy. As previous work has explored, and as discussed above, the notion of privacy in Privacy-Enhancing Technologies is often a narrow interpretation of what is a multi-faceted and contested concept [50, 89, 104, 107]. This is certainly the case for the subset of privacy-preserving computation PETs studied here. They turn privacy into something mathematically formalisable, e.g. in terms of entropy in cryptographic approaches, or indistinguishability in statistical approaches (see the standard definition reproduced below), which can all be understood as variations of ‘confidentiality’, a pillar of the security triad [30]. This means that other ways of understanding privacy may be de-emphasised and de-prioritised.

There are continuities here with earlier PETs, such as de-identification techniques based on hashing personal data. Phillips argues that these techniques embody privacy as protection ‘from unwanted intrusion’ [89]. However, they leave in place the ability of powerful observers to produce ‘panoptic’ knowledge which can be used to sort and discipline populations [23, 43]. Similarly, if we understand privacy as confidentiality, this can be engineered through architectures of data minimisation [103]; but this can lead to design choices which preclude alternative understandings of privacy (e.g. privacy-as-control), and hinder the exercise of related rights afforded by data protection law [111]. In our experts’ discussions, these alternative understandings of privacy were conspicuous by their absence.

Our findings also demonstrate that even while discourse around privacy-preserving computation restricts certain interpretations of privacy, it also stretches the meaning of privacy to incorporate unorthodox meanings, such as competitive secrecy, corporate asset protection, and government security. These are clearly significant and important use cases for the technology, but they arguably bear only a family resemblance to privacy as it relates to individuals and society. Indeed, intellectual traditions which value privacy as an individual right and public good have often been associated with opposition to corporate and government secrecy; according to them, privacy should be reserved for the weak, while transparency should be an obligation required of the strong [28]. In referring to all of these things as ‘privacy’, privacy-preserving computation technologies may elide significant political tensions between them. This is not to deny that they may have a powerful role to play in supporting privacy as an individual right and as a public good [40, 70]; but this confluence of quite different values under one banner complicates the narrative around whose interests they serve.

As well as tending to address narrow and perhaps unorthodox conceptualisations of privacy, it is important to recognise that these technologies do not protect other important values and interests. If our aim is to build and shape systems encompassing multiple social goals, where privacy is just one such goal, then privacy-preserving computation techniques have to be considered in relation to the whole system and the social context.
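For readers unfamiliar with the statistical formalisation referred to above, the ‘indistinguishability’ offered by differential privacy is conventionally stated as follows: a randomised mechanism $M$ is $\varepsilon$-differentially private if, for all datasets $D$ and $D'$ differing in a single individual’s record and all sets of outputs $S$,

\[
  \Pr[\,M(D) \in S\,] \;\le\; e^{\varepsilon}\,\Pr[\,M(D') \in S\,].
\]

This is a textbook definition, reproduced only to show how privacy is rendered mathematically tractable; note that whatever the guarantee bounds about an individual’s contribution, it says nothing about how the released results are subsequently used against groups or populations.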
The danger is that the societal problems of data processing technologies (such as the ways they create distinctions and hierarchies that reinforce power, shape politics, or facilitate abuse) are sidelined, redefined, or collapsed under the banner of ‘privacy’, so that privacy-preserving computation techniques can be positioned as the solution (what Pinch and Bijker term ‘closure by problem redefinition’ [90]). This danger was alluded to in P6[I]’s example of privacy-preserving border control (where people are still ultimately at the mercy of a powerful state), and in P9[D]’s concern about considering the wider power dynamics in the context of deployment.
Like other technologies touted as potentially revolutionary in recent years, these privacy-preserving computation techniques have yet to show their concrete impact. New technologies often emerge in unexpected ways, at unpredictable times, from niches of computer science: hypertext, Merkle trees, and neural networks were once confined to their respective research subfields before they became known more widely as the world wide web, blockchain, and ‘AI’ (in its latest guise of deep learning). Prior to their take-up in wider society, these specialised areas of research were conceived as laying the groundwork for purely technical pieces of invisible infrastructure, whose implications for human-computer interaction were remote and unclear.

However, we believe it is worth HCI researchers studying such technologies prior to their widespread adoption. Whatever technical and institutional forms they take, the journey of privacy-preserving computation techniques from the annals of cryptography into production code will be shaped in substantial part by the approach they take to a variety of human and societal challenges. Indeed, these challenges directly implicate some fundamental concerns of HCI, including: multifaceted (re)conceptualisations of the notion of ‘the user’; helping people navigate and manage computational complexity and its consequences; exploring how values like privacy can be reflected in the systems we build; and examining how different political agendas, economic rationales, and user groups shape and are shaped by those systems. These concerns all cohere and overlap in the emerging space of privacy-preserving computation.

This paper has aimed to provide a preliminary and partial outline of those challenges, laying some of the groundwork for substantial further exploratory and in-depth work to be done. In addition to several recent studies which focused on people’s understanding of these techniques and their willingness to disclose personal data in their presence, we have outlined a broader set of research questions that privacy-preserving computation prompts for HCI. These include understanding specific application contexts; the usability of privacy-preserving computation libraries and tools from a non-specialist developer’s perspective; and understanding the explanation and governance challenges associated with these techniques.
ACKNOWLEDGMENTS
This work was funded by EPSRC grant EP/S035362/1 and Callsign Inc.
REFERENCES
[1] 2020. Microsoft SEAL (release 3.5). https://github.com/Microsoft/SEAL. Microsoft Research, Redmond, WA.
[2] Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. 2016. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. 308–318.
[3] Yasemin Acar, Sascha Fahl, and Michelle L Mazurek. 2016. You are not your developer, either: A research agenda for usable security and privacy research beyond end users. IEEE, 3–8.
[4] Alessandro Acquisti, Laura Brandimarte, and George Loewenstein. 2015. Privacy and human behavior in the age of information. Science.
[5] In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security. 1231–1247.
[6] Adi Akavia, Dan Feldman, and Hayim Shaul. 2019. Secure Data Retrieval on the Cloud: Homomorphic Encryption meets Coresets. IACR Transactions on Cryptographic Hardware and Embedded Systems (2019), 80–106.
[7] Sebastian Angel, Hao Chen, Kim Laine, and Srinath Setty. 2018. PIR with compressed queries and amortized query processing. IEEE, 962–979.
[8] Hala Assal and Sonia Chiasson. 2019. ’Think secure from the beginning’: A Survey with Software Developers. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–13.
[9] Rebecca Balebako, Jaeyeon Jung, Wei Lu, Lorrie Faith Cranor, and Carolyn Nguyen. 2013. Little brothers watching you: Raising awareness of data leaks on smartphones. In Proceedings of the Symposium on Usable Privacy and Security. ACM, 12.
[10] Rebecca Balebako, Abigail Marsh, Jialiu Lin, Jason I Hong, and Lorrie Faith Cranor. 2014. The privacy and security behaviors of smartphone app developers. (2014).
[11] Rebecca Balebako, Florian Schaub, Idris Adjerid, Alessandro Acquisti, and Lorrie Cranor. 2015. The Impact of Timing on the Salience of Smartphone App Privacy Notices. In Proceedings of the ACM CCS Workshop on Security and Privacy in Smartphones and Mobile Devices. ACM, 63–74.
[12] Manuel Barbosa and Pooya Farshim. 2012. Delegatable homomorphic encryption with applications to secure outsourcing of computation. In Cryptographers’ Track at the RSA Conference. Springer, 296–312.
[13] Jeffrey Bardzell and Shaowen Bardzell. 2015. Humanistic HCI. Synthesis Lectures on Human-Centered Informatics 8, 4 (2015), 1–185.
[14] Lemi Baruh, Ekin Secinti, and Zeynep Cemalcilar. 2017. Online Privacy Concerns and Privacy Management: A Meta-Analytical Review. Journal of Communication 67, 1 (2017), 26–53.
[15] Alexander Bogner, Beate Littig, and Wolfgang Menz. 2009. Interviewing experts. Springer.
[16] Mark Bovens, Thomas Schillemans, and Robert E Goodin. 2014. Public accountability. The Oxford handbook of public accountability 1, 1 (2014), 1–22.
[17] Virginia Braun and Victoria Clarke. 2006. Using thematic analysis in psychology. Qualitative Research in Psychology 3, 2 (2006), 77–101.
[18] Ian Brown and Douwe Korff. 2009. Terrorism and the proportionality of internet surveillance. European Journal of Criminology 6, 2 (2009), 119–134.
[19] Brooke Bullek, Stephanie Garboski, Darakhshan J Mir, and Evan M Peck. 2017. Towards Understanding Differential Privacy: When Do People Trust Randomized Response Technique?. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. 3833–3837.
[20] Rosario Cammarota, Matthias Schunter, Anand Rajan, Fabian Boemer, Ágnes Kiss, Amos Treiber, Christian Weinert, Thomas Schneider, Emmanuel Stapf, Ahmad-Reza Sadeghi, et al. 2020. Trustworthy AI Inference Systems: An Industry Research View. arXiv preprint arXiv:2008.04449 (2020).
[21] Hao Chen, Zhicong Huang, Kim Laine, and Peter Rindal. 2018. Labeled PSI from fully homomorphic encryption with malicious security. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. 1223–1237.
[22] Jeremy Clark, Paul C Van Oorschot, and Carlisle Adams. 2007. Usability of anonymous web browsing: an examination of Tor interfaces and deployability. In Proceedings of the 3rd Symposium on Usable Privacy and Security. 41–51.
[23] Julie E Cohen. 2012. Configuring the networked self: Law, code, and the play of everyday practice. Yale University Press.
[24] Jessica Colnago, Yuanyuan Feng, Tharangini Palanivel, Sarah Pearman, Megan Ung, Alessandro Acquisti, Lorrie Faith Cranor, and Norman Sadeh. 2020. Informing the design of a personalized privacy assistant for the internet of things. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems.
[26] In Proceedings of the 2002 ACM Workshop on Privacy in the Electronic Society. 1–10.
[27] Emiliano De Cristofaro and Gene Tsudik. 2010. Practical private set intersection protocols with linear complexity. In International Conference on Financial Cryptography and Data Security. Springer, 143–159.
[28] Paul De Hert and Serge Gutwirth. 2006. Privacy, data protection and law enforcement. Opacity of the individual and transparency of power. Privacy and the criminal law.
[30] Information Systems Journal 11, 2 (2001), 127–153.
[31] Edsger W Dijkstra. 1982. Selected writings on computing: a personal perspective. Texts and monographs in computer science. Springer.
[32] Graham Dove, Kim Halskov, Jodi Forlizzi, and John Zimmerman. 2017. UX design innovation: Challenges for working with machine learning as a design material. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. 278–288.
[33] Cynthia Dwork. 2008. Differential privacy: A survey of results. In International Conference on Theory and Applications of Models of Computation. Springer, 1–19.
[34] Cynthia Dwork and Jing Lei. 2009. Differential privacy and robust statistics. In Proceedings of the Forty-First Annual ACM Symposium on Theory of Computing. 371–380.
[35] Cynthia Dwork and Adam Smith. 2010. Differential privacy for statistics: What we know and what we want to learn. Journal of Privacy and Confidentiality.
[36] Work-oriented design of computer artifacts. Ph.D. Dissertation. Arbetslivscentrum.
[37] Chris Elsden, Arthi Manohar, Jo Briggs, Mike Harding, Chris Speed, and John Vines. 2018. Making sense of blockchain applications: A typology for HCI. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 1–14.
[38] Úlfar Erlingsson, Vasyl Pihur, and Aleksandra Korolova. 2014. RAPPOR: Randomized aggregatable privacy-preserving ordinal response. In Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security. 1054–1067.
[39] David Evans, Jonathan Katz, Yan Huang, and Lior Malka. 2011. Faster secure two-party computation using garbled circuits. (2011).
[40] Joshua AT Fairfield and Christoph Engel. 2015. Privacy as a public good. Duke LJ 65 (2015), 385.
[41] Daniel Fallman. 2011. The new good: exploring the potential of philosophy of technology to contribute to human-computer interaction. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 1051–1060.
[42] Steven M Furnell, Nathan Clarke, Rodrigo Werlinger, Kirstie Hawkey, and Konstantin Beznosov. 2009. An integrated view of human, organizational, and technological challenges of IT security management. Information Management & Computer Security (2009).
[43] Oscar H Gandy Jr. 1993. The Panoptic Sort: A Political Economy of Personal Information. Critical Studies in Communication and in the Cultural Industries. ERIC.
[44] Chong-zhi Gao, Qiong Cheng, Pei He, Willy Susilo, and Jin Li. 2018. Privacy-preserving Naive Bayes classifiers secure against the substitution-then-comparison attack. Information Sciences 444 (2018), 72–88.
[45] C Gentry. 2009. A fully homomorphic encryption scheme. Ph.D. Dissertation. Stanford University.
[46] Ran Gilad-Bachrach, Nathan Dowlin, Kim Laine, Kristin Lauter, Michael Naehrig, and John Wernsing. 2016. CryptoNets: Applying neural networks to encrypted data with high throughput and accuracy. In International Conference on Machine Learning. 201–210.
[47] Kenneth Goldstein. 2002. Getting in the door: Sampling and completing elite interviews. PS: Political Science and Politics 35, 4 (2002), 669–672.
[48] Peter Leo Gorski, Yasemin Acar, Luigi Lo Iacono, and Sascha Fahl. 2020. Listen to Developers! A Participatory Design Study on Security Warnings for Cryptographic APIs. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–13.
[49] Matthew Green and Matthew Smith. 2015. Developers are users too: designing crypto and security APIs that busy engineers and sysadmins can use securely. (2015).
[50] Seda Gürses and Bettina Berendt. 2010. PETs in the surveillance society: a critical review of the potentials and limitations of the privacy as confidentiality paradigm. In Data Protection in a Profiled World. Springer, 301–321.
[51] Seda Gürses and Joris Van Hoboken. [n.d.]. Privacy after the agile turn. ([n. d.]).
[52] Irit Hadar, Tomer Hasson, Oshrat Ayalon, Eran Toch, Michael Birnhack, Sofia Sherman, and Arod Balissa. 2018. Privacy by designers: software developers’ privacy mindset. Empirical Software Engineering 23, 1 (2018), 259–289.
[53] Rob Hall, Stephen E Fienberg, and Yuval Nardi. 2011. Secure multiple linear regression based on homomorphic encryption. Journal of Official Statistics.
[54] In CHI ’07.
[55] Carmit Hazay. 2018. Oblivious polynomial evaluation and secure set-intersection from algebraic PRFs. Journal of Cryptology 31, 2 (2018), 537–586.
[56] Carmit Hazay and Yehuda Lindell. 2008. Efficient protocols for set intersection and pattern matching with security against malicious and covert adversaries. In Theory of Cryptography Conference. Springer, 155–175.
[57] Amir Herzberg. 2009. Why Johnny can’t surf (safely)? Attacks and defenses for web users. Computers & Security 28, 1-2 (2009), 63–71.
[58] Robert Hoppe. 2009. Scientific advice and public policy: expert advisers’ and policymakers’ discourses on boundary work. Poiesis & Praxis 6, 3-4 (2009), 235–263.
[59] Siam Hussain, Baiyu Li, Farinaz Koushanfar, and Rosario Cammarota. 2020. TinyGarble2: Smart, Efficient, and Scalable Yao’s Garble Circuit. In Proceedings of the 2020 Workshop on Privacy-Preserving Machine Learning in Practice. 65–67.
[60] Sheila Jasanoff and Sang-Hyun Kim. 2009. Containing the atom: Sociotechnical imaginaries and nuclear power in the United States and South Korea. Minerva 47, 2 (2009), 119.
[61] Somesh Jha, Louis Kruger, and Vitaly Shmatikov. 2008. Towards practical privacy for genomic computation. IEEE, 216–230.
[62] Noah Johnson, Joseph P Near, and Dawn Song. 2018. Towards practical differential privacy for SQL queries. Proceedings of the VLDB Endowment 11, 5 (2018), 526–539.
[63] Robert Jungk and Norbert Müllert. 1987. Future Workshops: How to create desirable futures. Inst. for Social Inventions.
[64] Ruogu Kang, Laura Dabbish, Nathaniel Fruchter, and Sara Kiesler. 2015. “My Data Just Goes Everywhere:” User Mental Models of the Internet and Implications for Privacy and Security. In Proceedings of Symposium On Usable Privacy and Security. 39–52.
[65] Caitlin Kelleher and Michelle Ichinco. 2019. Towards a model of API learning. IEEE, 163–168.
[66] Florian Kerschbaum. 2012. Outsourced private set intersection using homomorphic encryption. In Proceedings of the 7th ACM Symposium on Information, Computer and Communications Security. 85–86.
[67] Jennifer King. 2013. “How Come I’m Allowing Strangers To Go Through My Phone?”—Smartphones and Privacy Expectations. In Symposium on Usable Privacy and Security (SOUPS).
[68] Blagovesta Kostova, Seda Gürses, and Carmela Troncoso. 2020. Privacy Engineering Meets Software Engineering. On the Challenges of Engineering Privacy By Design. arXiv preprint arXiv:2007.08613 (2020).
[69] Ponnurangam Kumaraguru and Lorrie Faith Cranor. 2005. Privacy indexes: a survey of Westin’s studies. (2005).
[70] Zbigniew Kwecka, William Buchanan, Burkhard Schafer, and Judith Rauhofer. 2014. “I am Spartacus”: privacy enhancing technologies, collaborative obfuscation and privacy as a public good. Artificial Intelligence and Law 22, 2 (2014), 113–139.
[71] Joseph Lawrance, Christopher Bogart, Margaret Burnett, Rachel Bellamy, Kyle Rector, and Scott D Fleming. 2010. How programmers debug, revisited: An information foraging theory perspective. IEEE Transactions on Software Engineering 39, 2 (2010), 197–215.
[72] Lora Bex Lempert. 2007. Asking questions of the data: Memo writing in the grounded. The Sage Handbook of Grounded Theory (2007), 245–264.
[73] Pedro Leon, Blase Ur, Richard Shay, Yang Wang, Rebecca Balebako, and Lorrie Cranor. 2012. Why Johnny can’t opt out: a usability evaluation of tools to limit online behavioral advertising. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 589–598.
[74] Pedro Giovanni Leon, Blase Ur, Yang Wang, Manya Sleeper, Rebecca Balebako, Richard Shay, Lujo Bauer, Mihai Christodorescu, and Lorrie Faith Cranor. 2013. What matters to users?: factors that affect users’ willingness to share information with online advertisers. In Proceedings of Symposium on Usable Privacy and Security. ACM, 1–7.
[75] Jialiu Lin, Shahriyar Amini, Jason I Hong, Norman Sadeh, Janne Lindqvist, and Joy Zhang. 2012. Expectation and purpose: understanding users’ mental models of mobile app privacy through crowdsourcing. In Proceedings of Conference on Ubiquitous Computing. ACM, 501–510.
[76] Jialiu Lin, Bin Liu, Norman Sadeh, and Jason I Hong. 2014. Modeling users’ mobile app privacy preferences: Restoring usability in a sea of permission settings. In Symposium On Usable Privacy and Security. 199–212.
[77] Bin Liu, Mads Schaarup Andersen, Florian Schaub, Hazim Almuhimedi, Shikun Aerin Zhang, Norman Sadeh, Yuvraj Agarwal, and Alessandro Acquisti. 2016. Follow My Recommendations: A Personalized Assistant for Mobile App Permissions. In Proceedings of the Symposium on Usable Privacy and Security.
[78] Dominique Machuletz, Stefan Laube, and Rainer Böhme. 2018. Webcam covering as planned behavior. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 1–13.
[79] Donald MacKenzie and Judy Wajcman. 1999. The social shaping of technology. Open University Press.
[80] Antonio Marcedone, Zikai Wen, and Elaine Shi. 2015. Secure Dating with Four or Fewer Cards. IACR Cryptol. ePrint Arch.
[81] In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems. 1–9.
[82] Alfred J Menezes, Jonathan Katz, Paul C Van Oorschot, and Scott A Vanstone. 1996. Handbook of applied cryptography. CRC Press.
[83] Morgan Meyer. 2010. The rise of the knowledge broker. Science Communication 32, 1 (2010), 118–127.
[84] Payman Mohassel and Yupeng Zhang. 2017. SecureML: A system for scalable privacy-preserving machine learning. IEEE, 19–38.
[85] Brad A Myers and Jeffrey Stylos. 2016. Improving API usability. Commun. ACM 59, 6 (2016), 62–69.
[86] Nelly EJ Oudshoorn and Trevor Pinch. 2003. How users matter: The co-construction of users and technologies. MIT Press.
[87] Antti Oulasvirta and Kasper Hornbæk. 2016. HCI research as problem-solving. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. 4956–4967.
[88] Edward C Page, Bill Jenkins, William Ieuan Jenkins, et al. 2005. Policy bureaucracy: Government with a cast of thousands. Oxford University Press on Demand.
[89] David J Phillips. 2004. Privacy policy and PETs: The influence of policy regimes on the development and social implications of privacy enhancing technologies. New Media & Society 6, 6 (2004), 691–706.
[90] Trevor J Pinch and Wiebe E Bijker. 1984. The social construction of facts and artefacts: Or how the sociology of science and the sociology of technology might benefit each other. Social Studies of Science 14, 3 (1984), 399–441.
[91] Maarten Roy Prak. 2006. Craft guilds in the early modern low countries: Work, power and representation. Ashgate Publishing, Ltd.
[92] Lucy Qin, Andrei Lapets, Frederick Jansen, Peter Flockhart, Kinan Dak Albab, Ira Globus-Harris, Shannon Roberts, and Mayank Varia. 2019. From usability to secure computing and back again. In Fifteenth Symposium on Usable Privacy and Security (SOUPS).
[93] M Sadegh Riazi, Mohammad Samragh, Hao Chen, Kim Laine, Kristin Lauter, and Farinaz Koushanfar. 2019. XONN: XNOR-based Oblivious Deep Neural Network Inference. In USENIX Security Symposium (USENIX Security 19). 1501–1518.
[94] Ronald L Rivest, Len Adleman, Michael L Dertouzos, et al. 1978. On data banks and privacy homomorphisms. Foundations of Secure Computation 4, 11 (1978), 169–180.
[95] Phillip Rogaway. 2015. The Moral Character of Cryptographic Work. IACR Cryptol. ePrint Arch.
[96] arXiv preprint arXiv:1811.04017 (2018).
[97] Amartya Sanyal, Matt J Kusner, Adria Gascon, and Varun Kanade. 2018. TAPAS: Tricks to accelerate (encrypted) prediction as a service. arXiv preprint arXiv:1806.03461 (2018).
[98] Florian Schaub, Rebecca Balebako, Adam L Durity, and Lorrie Faith Cranor. 2015. A design space for effective privacy notices. In Proceedings of the Symposium On Usable Privacy and Security. 1–17.
[99] Abigail Sellen, Yvonne Rogers, Richard Harper, and Tom Rodden. 2009. Reflecting human values in the digital age. Commun. ACM 52, 3 (2009), 58–66.
[100] Steven Shapin. 1998. Placing the view from nowhere: historical and sociological problems in the location of science. Transactions of the Institute of British Geographers 23, 1 (1998), 5–12.
[101] Irina Shklovski, Scott D Mainwaring, Halla Hrund Skúladóttir, and Höskuldur Borgthorsson. 2014. Leakiness and creepiness in app space: Perceptions of privacy and mobile app use. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2347–2356.
[102] The Royal Society. 2019. Protecting Privacy in Practice: The Current Use, Development and Limits of Privacy Enhancing Technologies in Data Analysis. Technical Report. The Royal Society.
[103] Sarah Spiekermann and Lorrie Faith Cranor. 2008. Engineering privacy. IEEE Transactions on Software Engineering 35, 1 (2008), 67–82.
[104] Felix Stalder. 2002. The failure of privacy enhancing technologies (PETs) and the voiding of privacy. Sociological Research Online 7, 2 (2002), 25–39.
[105] Friedrich Steimann. 2018. Fatal abstraction. In Proceedings of the 2018 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software. 125–130.
[106] Donald E Stokes. 2011. Pasteur’s quadrant: Basic science and technological innovation. Brookings Institution Press.
[107] Herman T Tavani and James H Moor. 2001. Privacy protection, control of information, and privacy-enhancing technologies. ACM SIGCAS Computers and Society 31, 1 (2001), 6–11.
[108] UN Privacy Preserving Techniques Task Team. 2020. UN Handbook on Privacy-Preserving Computation Techniques. Technical Report. http://publications.officialstatistics.org/handbooks/privacy-preserving-techniques-handbook/UN%20Handbook%20for%20Privacy-Preserving%20Techniques.pdf.
[109] Blase Ur, Pedro Giovanni Leon, Lorrie Faith Cranor, Richard Shay, and Yang Wang. 2012. Smart, useful, scary, creepy: perceptions of online behavioral advertising. In Proceedings of the Eighth Symposium on Usable Privacy and Security. ACM, 4.
[110] Max Van Kleek, Ilaria Liccardi, Reuben Binns, Jun Zhao, Daniel J Weitzner, and Nigel Shadbolt. 2017. Better the devil you know: Exposing the data sharing practices of smartphone apps. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. ACM, 5208–5220.
[111] Michael Veale, Reuben Binns, and Jef Ausloos. 2018. When data protection by design and data subject rights clash. International Data Privacy Law 8, 2 (2018), 105–123.
[112] Xiao Wang, Alex J. Malozemoff, and Jonathan Katz. 2016. EMP-toolkit: Efficient MultiParty computation toolkit. https://github.com/emp-toolkit.
[113] Stanley L Warner. 1965. Randomized response: A survey technique for eliminating evasive answer bias. J. Amer. Statist. Assoc. 60, 309 (1965), 63–69.
[114] Kang Wei, Jun Li, Ming Ding, Chuan Ma, Howard H Yang, Farhad Farokhi, Shi Jin, Tony QS Quek, and H Vincent Poor. 2020. Federated learning with differential privacy: Algorithms and performance analysis. IEEE Transactions on Information Forensics and Security (2020).
[115] Alma Whitten and J Doug Tygar. 1999. Why Johnny Can’t Encrypt: A Usability Evaluation of PGP 5.0. In USENIX Security Symposium, Vol. 348. 169–184.
[116] Kathleen Broome Williams. 2012. Grace Hopper: Admiral of the cyber sea. Naval Institute Press.
[117] Aiping Xiong, Tianhao Wang, Ninghui Li, and Somesh Jha. 2020. Towards Effective Differential Privacy Communication for Users’ Data Sharing Decision and Comprehension. arXiv preprint arXiv:2003.13922 (2020).
[118] Andrew C Yao. 1982. Protocols for secure computations. IEEE, 160–164.
[119] Xun Yi, Mohammed Golam Kaosar, Russell Paulet, and Elisa Bertino. 2012. Single-database private information retrieval from fully homomorphic encryption. IEEE Transactions on Knowledge and Data Engineering 25, 5 (2012), 1125–1134.
[120] Minhaz Zibran. 2008. What makes APIs difficult to use.