Machine Learning Uncertainty as a Design Material: A Post-Phenomenological Inquiry
Jesse Josua Benjamin, Arne Berger, Nick Merrill, James Pierce
Jesse Josua Benjamin ([email protected]), Department of Philosophy, University of Twente, Enschede, Netherlands
Arne Berger ([email protected]), Computer Science and Languages, Anhalt University of Applied Sciences, Koethen, Germany
Nick Merrill ([email protected]), Center for Long-Term Cybersecurity, University of California, Berkeley, Berkeley, United States
James Pierce ([email protected]), School of Art + Art History + Design, University of Washington, Seattle, United States
ABSTRACT
Design research is important for understanding and interrogating how emerging technologies shape human experience. However, design research with Machine Learning (ML) is relatively underdeveloped. Crucially, designers have not found a grasp on ML uncertainty as a design opportunity rather than an obstacle. The technical literature points to data and model uncertainties as two main properties of ML. Through post-phenomenology, we position uncertainty as one defining material attribute of ML processes which mediate human experience. To understand ML uncertainty as a design material, we investigate four design research case studies involving ML. We derive three provocative concepts: thingly uncertainty: ML-driven artefacts have uncertain, variable relations to their environments; pattern leakage: ML uncertainty can lead to patterns shaping the world they are meant to represent; and futures creep: ML technologies texture human relations to time with uncertainty. Finally, we outline design research trajectories and sketch a post-phenomenological approach to human-ML relations.
CCS CONCEPTS
• Human-centered computing → HCI theory, concepts and models.

KEYWORDS
post-phenomenology, machine learning, design research, thingly uncertainty, horizonal relations
ACM Reference Format:
Jesse Josua Benjamin, Arne Berger, Nick Merrill, and James Pierce. 2021. Machine Learning Uncertainty as a Design Material: A Post-Phenomenological Inquiry. In CHI Conference on Human Factors in Computing Systems (CHI '21), May 8–13, 2021, Yokohama, Japan. ACM, New York, NY, USA, 14 pages. https://doi.org/10.1145/3411764.3445481
CHI '21, May 8–13, 2021, Yokohama, Japan. © 2021 Copyright held by the owner/author(s). ACM ISBN 978-1-4503-8096-6/21/05. https://doi.org/10.1145/3411764.3445481
Just as prior advances in computing technology have led to design researchers approaching algorithms, software interfaces, sensors, and actuators as design materials [44, 60], similar developments are happening with Machine Learning (ML) technologies (cf. [10, 27, 41, 80]). Recent advances in this regard have resulted in guidelines from computing corporations for curating ML results, improving user experience design and Explainable Artificial Intelligence (XAI) (e.g., [2, 19, 43, 81]). In light of the increasing ubiquity and technical opacity of ML, design research methodologies such as Research-through-Design, Speculative Design or Design Fiction are urgently needed in this space to develop better tools grounded in a rich and socially-situated understanding of how ML shapes everyday life. We argue that design research is well-positioned to describe, explore and reflect on how the "statistical intelligence" [10] of ML decision-making bleeds into the intimate human experience of lifeworlds, and to productively engage with emerging personal, societal and ethical issues. At the same time, we observe that design research struggles to engage ML as a material for design, as the probabilistic inference of models from data patterns withdraws from being present-at-hand. A major focus of prior work is the technical opacity of ML. However, just as ML-driven systems are difficult to interpret and exhibit emergent and oftentimes unpredictable behavior [8], the outputs they generate are inherently characterized by uncertainty from data noise and model variance [35]. ML uncertainty is a problem for HCI design research, because the field has not yet framed it as a design material with which to assess, design, and reflect on novel applications, objects and services that build on ML. On the contrary, prior work, for example in the area of XAI, is often concerned with using design methods to explain, rather than utilize, ML uncertainty.
We propose that approaching ML uncertainty as a specific material property of ML can help design research consider novel spaces for design intervention and interaction. In this paper, we investigate ML uncertainty from a post-phenomenological perspective to develop a conceptual vocabulary for engaging ML uncertainty in designerly ways. The principal argument of post-phenomenology is that technologies actively mediate human relations to the world [33]: technologies are neither completely deterministic (technological determinism) nor are they neutral tools (technological instrumentalism). Instead, human-technology relations are co-constitutive, with artefacts shaping how humans as specific subjectivities relate to the world [67]. In this view, design is doing philosophy "by other means" [69], and design materials are those involved in shaping technological mediation. We follow Hauser et al.'s framing of Research-through-Design (RtD) as post-phenomenological practice [25], and expand this view to further design research methodologies and the specific topic of ML. We thereby also continue prior work from the HCI community on using and extending the post-phenomenological framework (cf. [26, 58, 72]). Specifically, we investigate four design research case studies using a post-phenomenological lens, and develop provocative concepts for ML uncertainty similar to "strong concepts" [32] or robust "annotations" [14, 25] in HCI. In this, we take a modest step forward in making ML uncertainty tangible as a design material for HCI design researchers. We argue that uncertainty is the material expression (cf. [60]) of ML decision-making. While there are reasonable engineering incentives to minimize uncertainty in many use cases, uncertainty constitutes a fundamental attribute of any ML-driven system.
Uncertainty offers a representation of the 'fault lines' of ML decision-making, not only as a negative attribute of solutions but as simply part-and-parcel of ML technologies. As such, designerly making with uncertainty offers an opportunity to design artefacts and scenarios that attend to or more fully exploit the characteristics of ML decision-making. Our contributions consist of provocative, conceptual shorthands for ML uncertainty. We encourage designers not to see ML uncertainty as "to be explained away," but rather as generative of particular relations that can be designed for. At the same time, we also show how emerging ML applications are not readily accounted for with prior human-technology relation concepts. We provide three provocative concepts that build on ML uncertainty as a source for future research in the HCI, design research and philosophy of technology communities. As a general concept, we propose thingly uncertainty to capture the capacity of ML-driven artefacts to be uncertain about the world, and thereby to generate and adapt to a wide continuum of relations to other things, their datafied environment and people. We furthermore distinguish two specific concepts. First, we propose pattern leakage to describe how ML models become generative of patterns which enter and alter the everyday. Pattern leakage describes how ML models alter the world they seek to represent. Second, we propose futures creep to denote the often subtle transformations of the present by ML predictions: changing human relations to time by injecting probabilistic events such as climate change predictions into the direct perception of the present. Based on our derived concepts, we propose how design research can engage with ML uncertainty, and furthermore suggest horizonal relations as a post-phenomenological research trajectory into the specifics of how ML capacities and human experience co-extend and overlap.
Among the contributions of this paper, we (1) construct a post-phenomenological lens on ML based on related work; (2) analyze four case studies from different design research methodologies through this lens to discern ML uncertainty as a design material; (3) present thingly uncertainty, pattern leakage and futures creep as provocative concepts for future work; and (4) propose horizonal relations as a distinct human-technology relation, and lay out research trajectories for investigation.

In the following section we (1) provide a brief introduction to ML uncertainty; (2) frame how design research can pragmatically and critically engage with emerging technologies; and (3) argue that post-phenomenology is a promising lens to make ML uncertainty graspable in design research.
Many of the diverse algorithmic techniques common in ML today stem from cybernetics. McCulloch and Pitts' notion of neuronal activity [49] and Rosenblatt's Perceptron as a probabilistic model for learning and remembering information about the environment [63] are in direct connection to today's advances in deep learning, while Wiener's concept of negative feedback [74] can still be seen as the general principle which makes ostensibly novel technologies tick. And like cybernetics, today's ML technologies are reliant on probabilistic techniques as a computational means for the "taming of chance" [20]. The deployment of increasingly powerful probabilistic techniques (e.g., expectation-maximization, gradient descent, backpropagation) capable of adapting models to large datasets in real-world settings has also come at the cost of opacity, and has become generative of types of uncertainty originating from within technological deployments themselves.

In general, ML algorithms operate according to the principle of "insight through opacity" [50], using specific probabilistic techniques to infer a model that describes statistical patterns in datasets [6], which approximates some assumed real-world functional relationship [18]. The insight, i.e. a model that can detect significant patterns in relevant data (e.g., does this image show tumors?), depends on opacity: as each variable such as a pixel is computed as a vector in relation to all other variables, the combined dimensionality of vectors exceeds what humans can intuit. In short, the often "unreasonable effectiveness" [21] of current ML implementations comes at the cost of understanding how exactly insights were arrived at. Within ML research, the challenge of opacity is one of the most urgent research topics, evident in fields such as XAI or ML interpretability research. A common approach in ML research is to deploy algorithmic methods, so-called interpretability techniques [45, 53], to extract information from ML pipelines in order to explain outputs via textual or visual explanations.
While this research area is fundamentally oriented at experts, its focus on actual ML technologies offers up a catalogue of properties potentially of interest to design research. Given ML's epistemological as well as technical origins in cybernetics, we are specifically interested in how uncertainty is dealt with in light of the contemporary insight-through-opacity approach.

In ML research, the aim of dealing with uncertainty takes on a distinctly material grounding: given the reliance on probabilistic techniques, uncertainty is not only a human disposition but part and parcel of ML technologies. ML researchers frequently distinguish between two types of uncertainty, related to data (both training and input) and models respectively [12, 35, 37]. The former, "aleatoric" uncertainty, can be framed as 'noise': incoming signals in training or real-world deployments are inevitably impure and may affect the performance of ML algorithms. The latter, "epistemic" uncertainty, refers to the complex questions surrounding the 'fit' of a generated ML model. Inferred models embody, firstly, only one way of describing patterns from a given dataset, and given the insight-through-opacity approach the relationship to unknown models is uncertain. Secondly, an inferred model may also generate additional uncertainty when deployed "out-of-data", i.e. in settings different to training environments. Data and model uncertainty frequently feature in ML interpretability research. It is important to note that both are computable depending on the given ML deployment. Hohman et al., for example, deploy a technology probe for ML experts featuring various data visualizations in one interface, which includes a "regions-of-error" technique showing the model uncertainty of predictions [30]. Similarly, Kinkeldey et al. use a landscape metaphor in a cluster visualization, indicating through a grey-scale topography how certain the clustering model is about the membership of each individual point by their location in "peaks or slopes" [36]. Concerning data uncertainty, Kendall and Gal note that in image segmentation using deep learning, noise affects the boundaries surrounding objects [35]. Similarly, Kwon et al. note that data uncertainty in brain lesion detection with neural networks manifests around affected brain regions [39].

On a general level, we therefore consider data uncertainty to manifest with the objects of ML decision-making (e.g., data as images, strings, vectors), whereas model uncertainty concerns the mode of ML decision-making (e.g., clustering, classifying, predicting). While the technical fields take an understandably solutionist stance on ML uncertainty by attempting to either minimize or explain it, we argue that the notion of computational uncertainty as a part-and-parcel property of ML promises to open actual ML technologies to designerly research.
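The computability of both kinds of uncertainty can be made concrete with a minimal sketch of our own (illustrative only, not drawn from the cited works): epistemic uncertainty estimated as the disagreement across a bootstrap ensemble of models, aleatoric uncertainty as the residual noise that no model in the ensemble can fit away. The dataset, the polynomial models and the ensemble size are hypothetical choices; the cited works use deep-network analogues of the same idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: y = sin(x) + noise on x in [-3, 3]. The noise term is a
# stand-in for aleatoric (data) uncertainty: it cannot be reduced by
# gathering more data of the same kind.
x = rng.uniform(-3, 3, 200)
y = np.sin(x) + rng.normal(0.0, 0.2, size=x.shape)

# Epistemic (model) uncertainty: fit an ensemble of models on bootstrap
# resamples of the data and measure how much their predictions disagree.
def fit(x, y, degree=5):
    return np.polynomial.polynomial.polyfit(x, y, degree)

ensemble = []
for _ in range(30):
    idx = rng.integers(0, len(x), len(x))        # bootstrap resample
    ensemble.append(fit(x[idx], y[idx]))

x_test = np.array([0.0, 6.0])                    # 6.0 lies far "out-of-data"
preds = np.stack([np.polynomial.polynomial.polyval(x_test, c)
                  for c in ensemble])
epistemic = preds.std(axis=0)                    # ensemble disagreement

# Aleatoric uncertainty: the residual spread that remains even for the
# averaged model, here roughly the injected noise level of 0.2.
residuals = y - np.polynomial.polynomial.polyval(x, np.mean(ensemble, axis=0))
aleatoric = residuals.std()

# Inside the training range the ensemble agrees closely; far outside it,
# model uncertainty dominates -- the "out-of-data" case described above.
```

Inside the training range the thirty models barely disagree, while at the out-of-data point their extrapolations diverge sharply; this is the same contrast that interpretability techniques such as "regions-of-error" visualize for experts.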
Uncertainty, understood in an everyday sense as ambiguity or chance, is a well-known resource in design research (cf. [15]). As an alternative to quantitative studies, design research focuses on unearthing, exploring and understanding individual and situated encounters of people with technology that are often based on uncertainty and indeterminacy. Particularly 'third-wave HCI' [24] methodologies such as Research-through-Design, Design Fiction or Speculative Design have engaged this trajectory, foregrounding how technological artefacts are not merely solutions to discrete problems but rather embody open-ended, contextually dependent questions. Yet, design research has predominantly focused on either using ML for design, or using design for ML. With regards to the former, for example, Yang et al. have discussed how ML is engaged by design practitioners and researchers [80–82] to improve user experience through adaptation or personalization. They found that while practitioners and researchers are enthusiastic about using ML, design research so far lacks distinct methods that engage ML as a design material. Similarly, Dove et al. conclude that integrative prototyping methods reflecting both "ML statistical intelligence and human common sense intelligence" [10] are missing in the field. However, in contrast to this research focus on ML for design, we argue that more fundamental research is needed into how, and to what end, ML technologies can be a design material in their own right. Similarly, in HCI approaches to XAI, researchers envision design research methods for improving a given system's explainability, e.g. by developing explainability scenarios [3, 76] or conceptualizing contextually sensitive questions [43, 73] for various stakeholders. We see a similar, if inverted, limitation in this use of design for ML: in XAI, ML technologies tend to become the target (e.g., for designing explanations), not the material, of design research methods. We therefore next consider critical design research projects that, while not always directly addressing ML technologies, deal with uncertainties of complex design materials.

For example, Merrill et al. found that lay people believe different types of biosensors can reveal much more, or much less, than they actually can [52]. Others have explored how people speculate on smart things for the home, based on the fluidity and diversity of the values they attribute to their individual homes. This unraveled the idiosyncrasies and situatedness of potential future smart things, and highlights that these are bound to individual experiences with uncertainty [4, 56]. Pierce's design-led inquiry into smart camera systems [58] investigates the ways in which smart camera systems opaquely embody specific relations to the world—that is, functions which may be concealed due to undesired effects (e.g., distrust, failure). Redström and Wiltse further interrogate how user interactions are tied to infrastructural functions, and outline how the "surface-level simplicity" of interactions such as pressing play in Spotify belies "dynamic, sophisticated, and hidden backend complexity" [61].

This brief overview of critical design research projects shows that, in general, situated yet concealed design materials are nothing new, and opacity and its often uncertain effects are a recurring theme across application areas. However, we observe that few critical design research projects deal with specific rather than generic ML technologies. We hypothesize that design research currently lacks a conceptual grasp on the material properties that characterize ML technologies' inference of models from data and decision-making (e.g., prediction, classification). Due to their relationship with the input, inference and output of ML technologies, we propose that the computational model and data uncertainty from the technical fields discussed above are promising candidates. That is, rather than a human-centered notion of uncertainty, we propose that the technical framing of uncertainty promises a stronger foothold on the ML design space. However, it is unclear how ML uncertainty can be framed specifically for design research. For example, Hemment et al. call for artistic, designerly practices of revealing the "distortions in the ways in which algorithms make sense of the world" [29], yet do not outline how this may relate to existing design research methodologies as well as actual ML technologies. In the following, we discuss how the philosophical framework of post-phenomenology offers a starting point to address this gap.
Post-phenomenology is an empirical-analytical framework in philosophy of technology. In this view, technological artefacts shape human perception and action by mediating the world in specific ways [33]. The framework has become particularly influential in HCI design research, as it foregrounds the responsibilities and ethico-political stakes of designing technological artefacts—surfacing how technologies co-constitute our relations to specific 'slices' of the world, and how this relation makes us who we are [13, 25, 69].

Below, we outline established concepts of post-phenomenology in the form of their basic relational schemata, which allow empirical-analytical research to probe their objects of study for the roles that human, technology and world play. The schemata use established notations for simple connections between entities ( - ); interpretation of one by the other ( → ); being experienced together ( () ); being in the background ( / ) of another entity; or being already thematized (i.e., meaningfully focused [28]) in some way ( [] ) before being experienced. We then argue that post-phenomenology is a well-suited framework to describe, analyze, and interpret ML as a technology with active yet concealed relations to the world.

Human ways of perceiving and acting in the world are shaped by technological artefacts, co-constituting who relates to what/whom in which way. Depending on the technology (e.g., an ultrasound scanner), ways of perceiving phenomena in the world are shaped (e.g., the fetus-as-patient, the womb-as-potentially-dangerous-enclosure), and actions are invited or inhibited (e.g., decisions on fetal care), for a specific human subjectivity (e.g., the non-pregnant parent-as-caretaker) [67]. The general schema for technological mediation is:
Human – Technology – World
Post-phenomenology studies technological mediation through human-technology relations (Table 1), the structures of the empirical settings in which humans and technologies encounter each other, and how such encounters shape how humans relate to the world [62]. Ihde initially proposed four types of human-technology relations [33]. In embodiment relations, technologies become an inseparable part of human bodily-perceptual experience. Wearing glasses shapes our experience of the world, not of the glasses themselves. In hermeneutic relations, technologies make the world legible in a specific way. Reading a map mediates the world as a grid; a thermometer translates heat phenomena onto a legible scale. More overtly, in alterity relations technologies are seen as a quasi-Other. For example, an ATM, toy or robot are interacted with as if they have human-like intentions. In background relations, technologies merge with the background of experience, yet "texture" (i.e., generate an atmosphere for) that experience. Central heating or thermostats are absent presences in the experience of our home, but are rarely interacted with directly. Verbeek further expanded these relations to reflect the more subtle and also radical mediation by emerging technologies. In immersion relations, for instance, technologies such as augmented reality glasses or ambient intelligences merge with the environment, shaping how social relations can be enacted within them [70]. More radically, in cyborg relations, technologies such as microchips or pacemakers merge with the body, becoming indistinguishable in direct experience [68]. And lastly, in composite relations, technologies such as extremely long-exposure photography or computational imaging make things experienceable that have no direct correlation to ordinary human modes of perceiving space and time. Instead, composite relations mediate a "reality that can only be experienced by technologies" [68].
Relation    | Schema                    | Examples
Embodiment  | (I - Technology) → World  | Glasses, Cane
Hermeneutic | I → (Technology - World)  | Thermometers, Maps
Alterity    | I → Technology (- World)  | ATMs, Robots
Background  | I (- Technology/World)    | Heating, Thermostat
Immersion   | I ↔ Technology/World      | Virtual/Augmented Reality
Cyborg      | (I/Technology) ↔ World    | Implants, Pacemakers
Composite   | I → (Technology → World)  | Computational Imaging

Table 1: Human-technology relations defined by Ihde and Verbeek, with associated schemata and examples.
In all the above human-technology relations, post-phenomenology attributes intentionality to technological artefacts, a material 'directedness' that these artefacts exhibit in relating to the world [67]: through intentionality, such as the thermometer's combination of quicksilver and a scale, specific aspects of the world become legible in a specific way (e.g., 'reading' temperature). ML processes of inferring models from patterns found in data arguably strongly evidence this characteristic. At the same time, ML processes occur outside the phenomenological "horizon" of experience [33] in everyday life: we engage newsfeed interfaces as technological artefacts, not the ML sorting algorithms. This leads us to two considerations for grasping human-ML relations post-phenomenologically. First, ML technologies shape our perception of the world via artefacts (e.g., devices and interfaces) which display, or are composed according to, their outputs. Second, outputs are not pre-configured, but depend on how data patterns and the parameters of the specific ML algorithm converge in a model. Accordingly, though the "model-world relations" [22] of ML algorithm and data affect how we experience the world, they do so in a way that is not directly empirically present. Given the established post-phenomenological concepts of texturing from the background of experience, thematization, and the schema of composite relations, we can sketch a preliminary schema for technological mediation in human-ML relations as follows:
Human - Technology / (Model → [World]) - World

This schema represents how an ML model, though active in the background of technological artefacts ( / ) that we experience within our phenomenological horizon, nevertheless textures that experience through its interpretive ( → ), data-driven ( [] ) relations with the world. It can thus serve, similar to the general schema of human-technology-world relations shown above, to describe the technological mediation of ML. However, this schema alone does not yet provide concrete guidance on how precisely design research methodologies may gain a firmer grasp on ML through uncertainty, or whether there are particular human-technology relations beyond those in Table 1. So far, all it does is indicate that design researchers may probe or actively pursue ML outputs for higher variance (e.g., more or less uncertain), but not whether there may be ML-specific phenomena which should be paid special attention to. This forms our rationale for conducting an investigation of design research projects that engage ML technologies, as such an investigation will point to specific phenomena which then also feed back into our preliminary schema.

Case Study                                  | Methodology             | ML Technology          | Context          | Design Output
Pierce's Shifting Lines of Creepiness [58]  | Research-through-Design | Image Recognition      | Corporate / Home | Artefact Scenarios
Wong et al.'s When BCIs have APIs [79]      | Design Fiction          | Classification         | Corporate / Labor | Infrastructure Scenarios
Wakkary et al.'s Morse Things [72]          | Material Speculation    | Reinforcement Learning | Home             | Counterfactual Artefact
Biggs and Desjardins' Highwater Pants [5]   | Speculative Design      | Linear Regression      | Climate Futures  | Speculative Artefact

Table 2: An overview of the selected design research projects as case studies for our analyses.
In this section we describe our rationale for selecting our case studies, and how we use them to frame ML uncertainty as a design material through post-phenomenological analyses. Post-phenomenology considers design as a practice of shaping technological mediation, as a way of doing philosophy "by other means" [69]. We build our approach on Hauser et al.'s work on the relationship between Research-through-Design in HCI and post-phenomenology [25], outlining the former as an experimental variant of the "interpretive empiricism" of the latter.

The hypothesis for our approach is twofold. Firstly, design research unfolds and shapes specific relations between humans and the world by designing technological artefacts. Secondly, as philosophy-in-practice, design research may hold latent propositions on how to think ML uncertainty in a post-phenomenological, designerly way. Therefore, the goal of the remainder of this paper is to use post-phenomenology to explicate and conceptualize the role of ML uncertainty in specific design research projects involving ML technologies.
Our selected case studies are design research projects from diverse methodological strands and application domains, chosen to reflect both research and real-world concerns relating to ML technologies. We first gathered potential case studies from the corpus of CHI and DIS from 2015 to 2020. For the final selection, all authors met and discussed candidates (which also included e.g. [31, 40, 51]). We were especially interested in investigating a selection of case studies with methodological, technological, contextual and designerly diversity. The presented case studies (cf. Table 2) were selected to cover a wide range of human-technology relations with different ML applications, while at the same time representing the diversity of HCI design research methodologies and contexts. This diversity covers a wide spectrum of established human-technology relations (cf. Table 1), offering an empirical-analytical grounding for our own analysis (cf. [25]). Specifically, we selected Pierce's design-led inquiry into smart home security cameras to reflect issues of leaking surveillance in the home [58], Wong et al.'s design fiction on Brain-Computer Interfaces (BCI) to reflect infrastructural relations [77], Wakkary et al.'s material speculation into more-than-human design of artefacts [72], and Biggs and Desjardins' speculative design of artefacts for relating to climate change predictions [5]. Our case study selection enables us to consider how, within the case studies, ML uncertainty becomes an explicit or implicit facet in the process of the designerly shaping of technological mediation, and thereby of how people relate to the world.
Analyses were led by the first author in an iterative process, subsequent to the selection of case studies. Our high-level analytic process begins with the human-technology relations articulated in the work of Ihde and Verbeek, and cited and extended within HCI by e.g. [26, 58, 72]. We focus on the human-technology relations given through each case's design output (e.g., artefact, scenario) and context (e.g., home), and then expand to the particular ML technology employed in the case studies to reconsider how the former are affected. We do so through our initial theoretical schema for human-ML relations (cf. subsection 2.3). In our analyses, we search for designerly "intermediate-level knowledge" [32] that ties in human-technology and model-world relations, which also allows us to investigate how far established post-phenomenological notions can be advanced. By analyzing the entanglement of the specific ML technology (e.g., goal-driven reinforcement learning) with the given human-technology relation (e.g., embodiment relation), we discern phenomena unaddressed within the design research projects themselves. We then generalize these phenomena under provocative concepts, which we propose be used in design processes to engage ML uncertainty as a design material.
In the following, we investigate four case studies from distinct design research methodologies and unfold phenomena and research questions related to ML uncertainty as a design material.
Shifting Lines of Creepiness
Pierce undertakes a Research-through-Design (RtD) inquiry into smart camera systems, focusing on notions of 'creepiness' so as to interrogate how design artefacts navigate the creepiness and acceptability [58] of smart cameras in the home. Pierce investigates through RtD how smart home cameras may opaquely or involuntarily "leak data" into system ecologies; house more directly opaque "hole-and-corner" applications that exploit data; and lay the groundwork for future smart services by acting as a "foot-in-the-door." Pierce proposes speculative design scenarios which transpose the described phenomena into future application domains. The artefacts, smart home security cameras, are predominantly driven by ML image recognition algorithms, such as Convolutional Neural Nets or deep learning variants (e.g., [54]). Pierce's design considerations construct specific human-technology relations, which we now probe for the (implicit or explicit) presence of ML uncertainty.
We focus specifically on the notion of 'data leakage' in Pierce's framing, as it is most distinctly tied to the physical artefact of the camera and the ML algorithm. Depending on how the camera is oriented, recognized objects and people can become processed as data unbeknownst to or without the explicit intention of the owner of the smart camera (e.g., the neighbor's visitor). This implies one of Ihde's established human-technology relations, a background relation:
I ( - Camera / Home)
Background Relation
In this human-technology relation, the home is shaped as a distinct zone-of-observation, and humans or animals become detections-in-waiting. However, Pierce is explicitly concerned with the agential capacities of smart cameras, i.e. the capacity of ML recognition models to extract patterns from incoming data. The smart camera is less passive than a heating appliance or a thermostat: it has a dynamic relation to the surrounding world in its interpretation of incoming data. Therefore, we add this capacity as a distinctly composite, but non-experiential side to the background relation:
I ( - Camera / (Recognition → World) / Home)
Composite Background Relation
Yet, this does not quite cover what an ML algorithm within a smart camera does. Crucially, a detection is inferred from a particular relationship between patterns of pixels, e.g. the presence of patterns x and y in an input image indicates the detection of person-is-entering-a-camera-frame (in non-conceptual terms: the model does not 'know' this). As these relationships between patterns are learned from data, the relation can be refined by saying that the 'world' which the smart camera relates to is already technologically thematized to a certain extent: there is a propensity to recognize specific things in the world, which we indicate with square brackets ([]).

I ( - Camera / (Recognition → [World]) / Home)

Composite Background Relation
This notation now brings the specifics of ML in the smart camera home security system to the fore: it is not only the camera's physical lurking which textures people's experience of the home. Like a thermostat, people are not constantly aware of the camera, and also like a thermostat, sometimes people are, through notifications or alarms. Yet, people do not have explicit access to the layer of texturing constituted by the direction (i.e., intentionality) of the model towards latent recognitions in every frame. Depending on the trained model, some patterns are more probable to be recognized: it is not only data that leaks from the outside-in, as Pierce's data leakage covers, but furthermore patterns that leak from the inside-out (cf. Figure 1). That is, the ML model active within the smart camera has been trained to recognize faces of people within its field of view. It is not capable of distinguishing the neighbor's porch from the porch of its owner. As such, there is a likelihood that a person on the neighbor's porch, or even a portrait on a delivery truck decal, is categorized as a person, depending on how the ML model performs inference.

Figure 1: ML uncertainty may not only lead to data leaking from the outside-in, but patterns leaking from the inside-out. (Diagram: a Convolutional Neural Network registers input, applies a pattern, and updates that pattern through its application.) Camera Illustration © Pierce [58]

This is where we find a first manifestation of ML uncertainty as a design material: patterns that leak are also always probabilistic, e.g., there is an inherent variance in the detection due to model uncertainty. Kendall and Gal note that in their deep learning image segmentation use case, model uncertainty leads to footpaths becoming part of roads [35]. This kind of model uncertainty suggests that ML recognition through smart cameras may generate phenomena, rather than merely register them: when ML-driven artefacts recognize or process observations, model uncertainties can lead to patterns being projected into the world, which in turn may solidify the propensity for that pattern to be recognized. Or, falsely associated patterns may lead to new entities becoming significant. While Pierce proposes speculative artefacts to counteract data leakage, we consider the phenomenon of pattern leakage as a prompt for design research to actively, productively, and also critically engage with. Possible research questions for design research into smart camera pattern leakage may be the following: What novel hybrid entities, such as detections compressing human bodies and suburban surroundings, could become 'self-evident' through ML feedback loops? What unforeseen consequences could such uncertain entities have? In what way may they affect end-users? If pattern leakage leads to smart camera patterns that leak from the inside-out into the world in a way either nonsensical or odd to humans, what type of domains, products, or services could embrace such an uncertain mode of access to the world? Future design research could therefore specifically design for pattern leakage as an exemplary phenomenon of the "ontological surprises" [41] of ML, for example, by engaging in "ludic design" [16] of artefacts interpreting the world by allowing for or exaggerating model uncertainty.
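To make the variance behind such detections concrete: model uncertainty can be exposed by running the same input through a stochastic network several times and inspecting the spread of the outputs, in the spirit of the Monte Carlo dropout technique that Kendall and Gal's work builds on. The following is a minimal, self-contained sketch with a toy two-layer 'person detector'; all weights and numbers are invented for illustration and stand in for no deployed camera's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_detector(x, drop=0.5):
    """One stochastic forward pass of a toy 'person detector'.
    Dropout-like noise on the hidden layer makes each pass differ,
    so repeated passes expose model uncertainty (cf. MC dropout)."""
    w1 = np.array([[1.2, -0.8], [0.5, 0.9]])   # toy weights, not a real model
    w2 = np.array([0.7, -1.1])
    h = np.maximum(x @ w1, 0.0)                # ReLU hidden layer
    mask = rng.random(h.shape) > drop          # random dropout mask
    h = h * mask / (1.0 - drop)
    logit = h @ w2
    return 1.0 / (1.0 + np.exp(-logit))        # sigmoid: P(person)

x = np.array([0.9, 0.4])                       # one input 'frame'
scores = np.array([stochastic_detector(x) for _ in range(200)])
mean, var = scores.mean(), scores.var()
print(f"P(person) = {mean:.2f}, variance {var:.3f} (model uncertainty)")
```

A single thresholded output would hide the variance; the spread of `scores` is precisely the probabilistic character of the leaked pattern.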
When BCIs have APIs
In their design fiction on brain-computer interface (BCI) applications, Wong et al. speculate about the creation of an API for a Google service which lets developers tap into the reading of P300 occurrences in brainwaves. These P300 readings indicate recognition by measuring spikes in brainwave activity, which can be used for e.g. character recognition for input devices. Wong et al. employ design fiction to surface new forms of labor and asymmetries such technologies may engender [79]. Additionally, the researchers also outline distinct human-technology relations which allow us to probe the surfacing of ML uncertainty. Wong et al. design their fictional API on the basis that the algorithm used to infer the P300 signal was trained on "lab-based stimuli from a controlled environment," which is representative of various real-world applications and the tensions between training and general application. While the researchers did not specify the algorithm they had in mind, from state-of-the-art research in the field of ML-based P300 recognition (e.g., [59]) we can reasonably assume that a Support Vector Machine (SVM) was involved. SVMs follow the insight-through-opacity approach of ML, as they attempt to form 'hyperplanes' in high-dimensional spaces in order to classify (i.e., separate) data [7].

Figure 2: Data uncertainty can lead to pattern leakage of P300-labelled phenomena into everyday experience, affecting human-technology relations beyond the original domain. (Diagram: a Support Vector Machine receives a signal, correlates it to a pattern, and applies that pattern.) Photograph © Wong et al. [79]
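To ground the SVM reference, the sketch below trains a minimal linear SVM (sub-gradient descent on the hinge loss) on two synthetic 'brainwave feature' clusters standing in for lab-recorded P300 vs. non-P300 data. All data and hyperparameters are invented for illustration; a real P300 pipeline would use richer features and likely a kernelized SVM.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy training data: 2-D 'brainwave features', label +1 = P300, -1 = other.
# Lab-collected classes are cleanly separated; real-world data would overlap.
X = np.vstack([rng.normal(+1.0, 0.4, (50, 2)),
               rng.normal(-1.0, 0.4, (50, 2))])
y = np.array([1] * 50 + [-1] * 50)

# Linear SVM trained by sub-gradient descent on the regularized hinge loss
# (a minimal stand-in for the SVMs used in P300 recognition pipelines).
w, b, lam, lr = np.zeros(2), 0.0, 0.01, 0.1
for epoch in range(200):
    for i in rng.permutation(len(X)):
        margin = y[i] * (X[i] @ w + b)
        if margin < 1:                       # inside margin: hinge is active
            w += lr * (y[i] * X[i] - lam * w)
            b += lr * y[i]
        else:
            w += lr * (-lam * w)             # only the regularizer acts

# A 'real-world' sample near the hyperplane: its classification carries
# little margin -- the data uncertainty the binary label itself hides.
sample = np.array([0.1, 0.2])
print("classified as P300:", bool((sample @ w + b) > 0))
```

The hyperplane separates the lab clusters well, but ambiguous real-world inputs still fall on one side or the other: this is how the hyperplane can become 'inclusive' of non-P300 events.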
It is within an application based on the P300 API where we encounter ML uncertainty as a potential design material. In this scenario, Wong et al. present the following situation: crowdworkers are employed with Human Intelligence Tasks (HITs) of content moderation using BCIs. This task does not necessitate direct human action from the crowdworkers, but builds on a technical process which registers crowdworkers' brainwaves in relation to potentially harmful images presented on the crowdworking platform (e.g., Amazon's Mechanical Turk). In the scenario, crowdworkers voice insecurities over these HITs, debating possible P300 recognition errors and doubts over the fitness of their cognition to the task. Nevertheless, in order to successfully complete their HITs, workers employ drastic measures such as holding open their eyelids so as to moderate as much content as possible. At this point, we already encounter a complex, two-fold human-technology relation: crowdworkers are firstly wearing their BCI, relying on it passively to work correctly, and secondly looking at the HIT interface which presents them with content moderation tasks. As such, an initial human-technology relation in this scenario may be depicted as follows:

(I - BCI) → (HITInterface - World)

Embodiment Hermeneutic Relation
However, this schema does not yet show how the SVM employed to detect P300 signals is involved. As the P300 signal model interprets (→) incoming brainwave-data about the world ([]), we may schematize this process as follows:

Signal → [World]

We may therefore extend the above scenario, which integrates an embodiment relation (crowdworkers are wearing BCIs) and a hermeneutic relation (crowdworkers are observing images for content moderation on an interface), with a further composite relation:

(I - BCI / (Signal → [World])) → (HITInterface - World)

Composite Embodiment Hermeneutic Relation
Here, ML uncertainty surfaces in Wong et al.'s research as an implicit design material. The model for P300 stems from the SVM algorithm constructing a hyperplane to separate other signals from the P300 pattern. The latter was, as Wong et al. present, learned from lab stimuli in controlled environments. Hence, the data uncertainty of the P300 pattern recognition pipeline provides another dimension to the above concept of pattern leakage. Through data uncertainty, the SVM hyperplane may be inclusive of non-P300 events, and the P300 pattern (i.e., not the 'actual' P300 occurrence, but its model) can thereby 'leak'. When the trained model is transposed into other domains (e.g., content moderation), the overlap of real-world stimuli in phenomenological experience with data uncertainty can lead to P300 patterns leaking into human-technology relations (cf. Figure 2). For example, we can imagine automated filtering of social media newsfeeds based on BCI-driven content moderation. This would make the world legible in particular ways, building on the emergence of leaked patterns. These could moderate content based on how a particular BCI algorithm interprets users' cognitive functions. Pattern leakage thus describes a phenomenon in Wong et al.'s project which the researchers have considered as an error generative of labor exploitation. In the spirit of Wong et al.'s research, of using design fiction to unfold the "banality of more probable outcomes" [79] of emerging technology applications, ML uncertainty beyond the notion of error, in the form of pattern leakage, can be engaged explicitly as a design fiction material. Design fiction can probe for a range of 'real-world' events that due to pattern leakage become associated with a particular service or artefact in unplanned, yet generative processes.
For example, P300 pattern leakage into automated social media filtering can, due to emergent networked media effects, become generative of events such as novel trends, memes, or socio-material practices that are irreducible to human decision-making, but rather are distinct effects of ML data uncertainty in the interpretation of human cognition by a specific algorithm. Design fiction research can therefore use this concept to attend to non-linear, unplanned yet generative effects of future systems.
Morse Things
Wakkary et al. have developed 'Morse Things' as a set of computationally enhanced bowls that can function as everyday household objects while simultaneously communicating among themselves in a human-excluding fashion [72]. Following the material speculation methodology, Morse Things are actual artefacts that are nonetheless counterfactual to expected ways of interaction. They yield a 'possible world' in which such artefacts can exist in their own right [71]. Sets of three (from small to large) were distributed among households by the researchers, and after living with the artefacts for six weeks, a workshop was held in which participants shared stories and constructed scenarios for future technologies. Wakkary et al. thereby sought to probe how thing-centered design methodologies can elucidate design spaces for living with things that are not exclusive to human utility. Specifically, the Morse Things do not register their interactions with humans, but rather are designed to communicate amongst themselves when awake, only sonically emitting Morse code (i.e., short and long beeps) into their surroundings:

"The Morse Things mostly sleep (computationally speaking) and wake at random intervals during the day at least once every eight hours. Upon waking a Morse Thing will send and receive messages to and from other Morse Things in its set. The messages sent by each Morse Thing are in Morse code and simultaneously expressed sonically and broadcasted on Twitter [as cryptic encodings]." [72]

In a subsequent reflection on the project, Oogjes et al. outline further details of the design process, particularly the use of ML to promote a "thing-centered logic" in the Morse Things' decision on active periods, based on how many other Things were successfully communicated with previously [57]. Specifically, a reinforcement learning algorithm was used, taking incoming data to update a plan of action over time (cf. [42, 75]). Through this description we can first sketch the human-technology relation with Morse Things as a background relation:
I (- Morse Things / Home)
Background Relation
While human domestic dwellers experience their home through their daily routines, Morse Things lurk in the background, doing what they do. Morse Things cannot be urged to do what they do; nonetheless their activities form a backdrop to the domestic experience, texturing the dwellers' perceptions of the home, which becomes layered with latent technological activity and opaque beeping. This background relation was often made explicit by participants, who became quite involved with the (real or imagined) activity and purpose of the Morse Things. A typical example is how one of Wakkary et al.'s participants would "continue to keep trying to grab the bowls while they are 'tweeting' [. . . ] Maybe I'll be able to tell them apart eventually." Therefore, a more accurate rendering of the human-technology relation in this mode is an alterity relation, a quasi-Other:

I → Morse Thing ( - Home)
Alterity Relation
While the communication among Morse Things is 'in itself' beyond the horizon of our experience, it clearly is a focus even in its opacity, and becomes a feature of the 'objecthood' of the Morse Thing for Wakkary et al.'s participants, who actively probe the opaque communication. This tension constitutes the 'gap' between things and us that Wakkary et al. find to be fruitful as a design space for ambiguity and reflection (see https://doenjaoogjes.com/portfolio/morse-things/, accessed 09/15/2020). Participants of the study echoed this sentiment through statements such as "that's why I like the idea of something else, let them be themselves. Other stuff is going on that we're just totally unaware of and it doesn't matter."

Figure 3: The ideal 'waking-up' time is predicted by each Morse Thing based on the amount of communication during prior cycles, with adjustments after every activity. (Diagram: a reinforcement learning loop receives the environment state, sends at the predicted best timeslot, and adjusts its prediction.) These probabilistic 'futures' of the Morse Things creep into human experience. Photograph © Wakkary et al. [72]

For our purposes, the crucial aspect is that while each Morse Thing is randomly initiated into 'waking up', over time they learn from the most 'successful' phases of waking by logging the times when communication with many other Morse Things occurred, and predicting the optimal 'timeslot' for waking using ML [57]. Through reinforcement learning, each artefact updates an internal model based on the prior 'success' of its actions in the overall environment of other Morse Things' activity (Figure 3). With the constantly updating model of an opportune timeslot, ML uncertainty becomes a particularly rich resource that allows for potentially broadening the design research inquiry. First, we can schematize the timeslot model as a machine interpretation of how other Morse Things have previously acted in the world:
Timeslot → Morse Things
The model for a timeslot is based on a prediction of Morse Thingswaking and communication at a specific time, relating to all MorseThings’ specific interpretation of the ‘world’ as represented by allMorse Things’ activity. Next, we can use this schema to more pre-cisely outline how ML figures in the previously presented human-MorseThings relations:
I(- Morse Things / (Timeslot → Morse Things) / Home)
Composite Background Relation

I → Morse Thing / (Timeslot → Morse Things) ( - Home)
Composite Alterity Relation
Our denotation allows us to characterize how ML processes affect the given human-technology relation: it is explicitly related to time. Each Morse Thing learns about 'ideal' times individually, with the overall assumption that learning will converge on specific times across all Morse Things. This constant attempt towards convergence invariably affects situated human-technology relations: participants wonder when a Morse Thing will act, and this textures not only the spatial surroundings of a home but also its temporal characteristics. Considering model uncertainty in this scenario, we term the phenomenon of Morse Things' involvement in human experience of time a futures creep: the impact of predictions on situated human experience. Put differently, ML uncertainty allows us to explicitly denote how the thingly model of time impacts participants' experience of Morse Things. There is not a singular prediction, but multiple entangled predictions of varying degrees of probability. Various models of a timeslot coalesce around the human experience of beeps and tweets; 'thingly futurings' that characterize and at the same time are irreducible to human-artefact interaction. Expanding on this design space could, for instance, see an expansion of the "animistic" [48] tendencies of Morse Things into even more individualistic thingly futurings: for example, by designing different confidence thresholds for a Morse Thing to decide on a timeslot, or by modulating the frequency of beeps to represent uncertainty. Futures creep, then, could be used to characterize a thingly uncertainty of actual artefacts, further interrogating questions such as: How could material speculation artefacts be more explicitly designed around notions of time and uncertainty?
For example, investigating how ML-driven products or services mediate relations to time in specific settings, design researchers could purposely pursue material speculation on intermediary, 'time-keeping' artefacts for futures creep.
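The Morse Things' timeslot learning can be caricatured as a small reinforcement learning loop: a bandit over candidate waking slots whose value estimates are updated by how many peers answered. Everything below (eight slots, a 'social' slot, the reward noise) is a hypothetical stand-in for the behaviour Oogjes et al. describe, not their implementation.

```python
import random

random.seed(3)

SLOTS = list(range(8))            # candidate waking timeslots in a day
value = {s: 0.0 for s in SLOTS}   # learned estimate of communication success
counts = {s: 0 for s in SLOTS}

def peers_awake(slot):
    """Toy environment: how many other Morse Things answer in this slot.
    Slot 5 is assumed to be the 'social' slot; replies are noisy."""
    base = 3 if slot == 5 else 1
    return max(0, base + random.choice([-1, 0, 1]))

for day in range(300):
    # Epsilon-greedy: mostly exploit the best-known slot, sometimes explore.
    if random.random() < 0.2:
        slot = random.choice(SLOTS)
    else:
        slot = max(SLOTS, key=lambda s: value[s])
    reward = peers_awake(slot)
    counts[slot] += 1
    value[slot] += (reward - value[slot]) / counts[slot]  # incremental mean

best = max(SLOTS, key=lambda s: value[s])
print("converged on timeslot", best)
```

The uncertain, still-shifting value estimates are the 'thingly futurings' in miniature: the artefact's next waking time is a prediction that humans can only guess at from the outside.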
Highwater Pants
In their speculative design project, Biggs and Desjardins design and deploy the artefact 'Highwater Pants': a pair of pants whose legs lengthen or shorten based on whether the wearer is in an area threatened by predicted sea-level rise in the future [5]. The researchers mobilize speculative design to probe human relations to possible futures of climate change, and argue that the Highwater Pants make such futures tangible as they "bend time." The artefact was deployed with cyclists, as they have "embodied and sensorial" knowledge about their environment, and specific garments are part of cycling culture. The Highwater Pants are equipped with a fabric micro-controlling unit, which combines a variety of operations. First, it controls actuation of the pant legs into an up or down state. Second, it houses a GPS module which retrieves the current geographical position of the wearer. Third, it compares the position to a set of polygons on a map, which have been curated by the researchers. Within the polygons, sea-level rise in "30 to 50 years" was predicted by the National Oceanic and Atmospheric Administration using ML linear regression algorithms (cf. [65]). Such algorithms extrapolate tendencies for values to increase linearly based on previous data (cf. [9]); in this case, decades' worth of temperature and longitude/latitude data.
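As a concrete illustration of such extrapolation, the sketch below fits an ordinary least-squares line to invented yearly sea-level measurements and projects it 30 to 50 years ahead. The numbers are hypothetical, not NOAA's data or method, and the sketch deliberately omits the growing uncertainty of far-future extrapolation that the paper foregrounds.

```python
import numpy as np

# Toy yearly sea-level measurements (mm above a baseline); noisy but
# trending upward. Purely illustrative numbers, not NOAA data.
rng = np.random.default_rng(7)
years = np.arange(1980, 2021)
levels = 3.1 * (years - 1980) + rng.normal(0, 6, years.size)

# Ordinary least-squares fit, then extrapolation decades ahead: the kind
# of linear-regression projection the Highwater Pants' polygons rely on.
slope, intercept = np.polyfit(years, levels, 1)
for horizon in (2050, 2070):
    pred = slope * horizon + intercept
    print(f"predicted rise by {horizon}: {pred:.0f} mm")
```

The further the horizon, the more the prediction amplifies the data uncertainty of past measurements; this widening band is what the researchers' "padded" polygons acknowledge.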
Riding a bike is, classically, an embodiment relation. As we navigate the world, the bike itself withdraws, becoming part of our extended bodily relationship to our environment. The Highwater Pants, initially, also withdraw into this relation:

(I - HighwaterPants - Bike) World
Embodiment Relation
Based on our prior analyses, the prediction model of the linear regression algorithms can be schematized as an interpretation of an already thematized 'slice' of the world:
Prediction → [World]

As Biggs and Desjardins were mindful of the uncertainty of ML predictions, they "padded" their polygons so as to increase the zones of prediction and to give participants more opportunities for reflection. When comparing the wearer's GPS location with the "geofencing" of the polygons, the Highwater Pants' pant leg is rolled up or down. Both the Highwater Pants and the bike remain intimately tied to bodily-perceptual experience, but the embodied perception of the environment is textured by likely future states of the same environment. Biggs and Desjardins' empirical findings from deploying pairs of Highwater Pants with experienced local cyclists reflect this intimately. For instance, one participant muses on how "it'd be a totally different experience living here without [the waterfront parks]" after the Highwater Pants indicated that these may disappear. Based on the schemata so far used, we may note the entangling of an embodiment relation (with the bike and the Highwater Pants) with a probabilistic prediction of future sea-levels (i.e., a prediction of an already 'thematized' world) as follows:

(I - HighwaterPants / (Prediction → [World]) - Bike) World

Composite Embodiment Relation
Figure 4: The Highwater Pants correlate the GPS location of the wearer with the dataset for sea-level rise prediction, and adapt their activity to the relevant prediction. (Diagram: linear regression correlates GPS to the dataset and adjusts to the prediction.) The futures creep mediated by the Highwater Pants affects how bikers relate to possible future states of their environment. Photograph © Ioan Butiu / Biggs and Desjardins [5]
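The geofencing step, comparing a GPS fix against the curated polygons, reduces to a point-in-polygon test. Below is a self-contained ray-casting sketch with a hypothetical rectangular flood-risk zone; the coordinates are invented, and a real deployment would use the padded, irregular polygons Biggs and Desjardins describe.

```python
def inside(point, polygon):
    """Ray-casting point-in-polygon test: count how often a horizontal
    ray from the point crosses the polygon's edges (odd = inside)."""
    x, y = point
    hit = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                         # edge spans the ray
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:  # crossing is right of point
                hit = not hit
    return hit

# Hypothetical flood-risk polygon as (lon, lat) pairs; in the project these
# zones were 'padded' outward so uncertain predictions trigger earlier.
zone = [(-122.34, 47.60), (-122.30, 47.60), (-122.30, 47.64), (-122.34, 47.64)]
rider = (-122.32, 47.62)   # GPS fix from the pants' module
print("roll up pant legs:", inside(rider, zone))
```

The "oscillating between up and down at geofence boundaries" that participants reported corresponds to GPS fixes jittering across exactly this test's boundary.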
Initially, the above denotation resurfaces the phenomenon of pattern leakage which we discussed with regards to smart cameras and BCIs above: both training data and predicted data are intrinsically uncertain, and the patterns of likely sea-level rise are projected onto the 'real' world. However, the researchers' empirical findings also suggest a further nuance to the Highwater Pants: participants actively tried to find the boundaries of polygons in order to discover where predictions lose their "time-bending" hold on the present. In their experience, participants noted how the Highwater Pants would be "oscillating between up and down at geofence boundaries, creating a kind of anticipatory sensation in a liminal zone." Therefore, the initial relation to the prediction model becomes more overt in the form of an alterity relation:

(I - Bike) → HighwaterPants / (Prediction → [World]) (World)

Composite Embodiment Alterity Relation
This complex multi-relationality of human-HighwaterPants interaction shows how the predicted post-sea-level-rise world is probed via the Highwater Pants against the backdrop of one's own bodily-perceptual relations with the environment, while simultaneously being part of the human-bike embodiment relation. The Highwater Pants as a quasi-Other become, in the words of Biggs and Desjardins, an intermediary "oracle, or translator, speaking for/from an ecology-to-be," which directly affected participants' immediate bodily perception of their own future. This adds a further variant to our concept of futures creep: the ecology-to-be is not only an 'image' of the future. Rather, the "future present" [11] of inferred models, fluctuating with the data uncertainty of past measurements, becomes an implication of human perception and action in that future. The time-bending phenomenon which Biggs and Desjardins explicate is thus not a technological operation triggering a human response (as had been the case with the Morse Things' futures creep), but rather the injection of prediction into the human phenomenological experience of time (cf. Figure 4). The way participants think of the future is co-shaped by the Highwater Pants' mediation of a prediction, and takes on a particular shape as a potentially endangered lifeworld, additionally reflecting questions of whether my present is the one leading up to the realization of this prediction. The uncertainty of this prediction has a direct effect on how someone imagines relations between their present and future, and their capacity to shape this specific, technologically thematized time. This aspect of futures creep, then, can be used by design researchers to further investigate how design artefacts shape and transform relations between present and future.
As futures creep is a phenomenon of ML uncertainty, design researchers can also ask: What kind of positionality towards such temporal shapes does the variance of model and data uncertainty provoke, for whom, and towards which ends? Design research methodologies such as speculative design or RtD can use this facet of futures creep for an inquiry into the specifics of time-bending by artefacts, for example investigating how model uncertainty could be deliberately calibrated by users to explore non-anthropocentric notions of time.

Through our analyses, we derived three provocative concepts for ML uncertainty: thingly uncertainty, pattern leakage, and futures creep. We summarize and define these for future research below. While we have separated our case studies into the phenomena we tried to make referable, it should be noted that they are not mutually exclusive and simply offer lenses that bend toward the specific phenomena generated by ML models. Furthermore, our analyses have also surfaced potential research trajectories for more general post-phenomenological ML studies, which we address separately.
Data and model uncertainty are intrinsic and defining properties of ML. From an engineering or XAI perspective, uncertainty may be seen as a phenomenon to be curbed or explained in order to frame outputs more unambiguously. But, as we have argued, ML uncertainty is also a promising design research material: as an inherent attribute of contemporary ML, data and model uncertainty speak to the material involvement of ML decision-making with the world. In what follows, we take a further modest step forward and articulate working definitions of our concepts as provocative shorthands for future design research to productively engage with ML uncertainty as a design material. We conclude this section by summarizing their utility for design, and connecting our concepts to the emerging discourse of more-than-human design in HCI.
Thingly Uncertainty

With thingly uncertainty, design researchers can go beyond human uncertainty about an artefact and engage how the uncertainty of an artefact can become generative of specific, technologically mediated phenomena in the world. As a general concept, we posit that this is a particularly powerful shorthand for ML-enabled artefacts. Again, uncertainty in this regard is not a negative attribute, but simply part-and-parcel of the use of probabilistic techniques in ML. Ihde and Verbeek have previously elaborated on a "thingly" [34] or "material hermeneutics" [33, 67]: technological artefacts, through their material properties (e.g., affordances, representations, sensors, actuators), shape how the world becomes legible in specific ways. Thingly uncertainty, however, attributes more precisely the kind of agency that ML-driven artefacts exhibit. Rather than fixed, "scripted" (e.g., [1]) readings, ML-driven artefacts can be much looser and more uncertain about the legibility of the world. As such they act and adapt within a continuum of relations to their environment and the humans that experience them. Thingly uncertainty can help design researchers to more directly explicate the variance of both ML and human, and point to non-normative ways of how the human sees and is seen in human-ML relations: How do the entities, assets and attributes that define humans shift across types (i.e., data and model) and amplitudes (e.g., low or high thresholds) of ML uncertainty? What could a speculative set of norms based on such ML-mediated variances look like?

For example, Wong et al.'s proposal of "infrastructural speculations" [78] can be extended through thingly uncertainty. Considering speculative design and design fiction, Wong et al. propose this concept to more closely define the role of 'actual' infrastructures, like the socio-cultural, economic or operational micro- and macro-infrastructures, that need to be in place for an artefact to exist.
Often, such infrastructures will involve ML or advanced AI capacities, and thingly uncertainty offers a means to consider the quantitative and qualitative particularities at the 'joints' of infrastructural speculations. On the one hand, researchers can investigate how thingly uncertainty will have to be explained, minimized or ignored for the object of research (e.g., a speculative scenario, practice, artefact) to exist. On the other hand, infrastructural speculations also gain a concrete technological dimension of variance within lifeworlds: Where, for instance, can pattern leakage or futures creep occur? Who or what needs to perform care for, or is affected by, these computational phenomena? Thingly uncertainty can thereby offer a basis to consider actual phenomena generated and mediated by ML for design research.

Pattern Leakage

This concept describes how ML uncertainty affects the ways in which objects, entities, events and people are recognized in ML-driven systems, noting the propensity of probabilistic patterns to shape the world they are deployed to represent. Through our case studies, we found instances of pattern leakage related both to data and model uncertainty. Design research can appropriate this concept to probe both types of ML uncertainty in the following ways.

When investigating Wong et al.'s
When BCIs have APIs, we proposed that pattern leakage names how phenomenological experience becomes affected by data uncertainty. Brainwave signals become registered as P300 instances, yet due to data uncertainty it is likely that the world becomes populated with 'surplus' P300 instances. Future design research could therefore pay attention to how data uncertainty leads to a data-driven 'inclusivity' of classifications or predictions that, due to intent, oversight, subtlety or opaqueness, bleed into specific socio-material constellations. Focussing on pattern leakage due to data uncertainty, design research can more precisely reflect on the thresholds for participation in ML-driven data ecologies. For instance, design fiction could use this concept to avoid assumed linearities in future technological settings, paying close attention to how 'slippage' in classification or prediction can not only lead to breakdowns, but rather be generative in its own right.

In analyzing Pierce's
Shifting Lines of Creepiness, we noted that smart cameras, through model uncertainty, may actively generate phenomena rather than passively register them. Learned patterns (e.g., a person in a restricted area) may leak onto events in the wild (e.g., a movie poster), and affect how things and humans see the world. We propose that future design research take on this concept to investigate how the intentionality of diverse ML algorithms (e.g., artificial neural networks, SVMs) to read the world in a specific way is generative of distinct patterns leaking into situated human-technology relations. For example, an ML-driven IoT artefact could have various options for its algorithmic functionality. Researchers may actively provoke and design for patterns to leak under different algorithmic choices. Deploying various artefacts, investigations can then begin into whether and how distinct pattern leakage phenomena differentially affect, or become generative of unanticipated, human-technology relations with the ML-driven artefact.
Futures Creep

Futures creep denotes how ML-driven artefacts affect human relations to time through probabilistic, uncertain predictions. The impact of technologies on human conceptions of time is a complex issue of investigation, particularly when considering situated manifestations of meta-concepts such as the "ontologies of times" of specific eras (cf. [47]). However, as ML is fundamentally a technological approach to making predictions (e.g., of classification, recognition, translation) about data, the concept of futures creep provides an opening into this more subtle side of ML for design research.

With regards to model uncertainty, in our analysis of Wakkary et al.'s
Morse Things we found that futures creep in ML-driven artefacts denotes a specific, human-excluding side: when such artefacts do what they do may not be directly correlated with human experience, yet that is precisely how the artefacts are humanly interpreted to have a specific character. In line with Marenko and van Allen’s proposal for “animistic design” [48] research, this facet of futures creep can be used to inquire into how different kinds of data uncertainty thresholds for ML-driven artefacts mediate the characteristics that humans attribute to them. For instance, artefacts could be purposely designed to exhibit animistic tendencies by allowing for higher variance in data uncertainty for activity, and researchers may investigate whether such technological decisions translate into mediations of particular artefactual ‘characters.’

Model and data uncertainty combined showed a yet more subtle facet of futures creep in Biggs and Desjardins’
High Water Pants: the prediction of future sea-level rise was mediated by their speculative design artefact in such a way that it affected participant relations to the present and possible futures, with participants actively probing the range of predictions. The facet of futures creep related to time-bending, i.e., directly affecting the shape of present and future, can be used to actively interrogate how and whether distinct ML algorithms and their respective forms of model uncertainty generate specific relations to ‘temporal shapes.’ Specifically, futures creep may be used in “attending to temporal representations” [38] in design fiction. For example, researchers may interrogate whether a higher degree of variance in ML-driven predictions may bring about novel political or civic norms in speculative scenarios, e.g. people choosing to be in loose, variable relations to the future.

Our provocative concepts can serve as a novel conceptual vocabulary for building design artifacts that provocatively engage with ML uncertainty, which allows for in-depth investigations of human-ML relations throughout the design process. Designing for thingly uncertainty with futures creep and pattern leakage can shed light on how human subjectivities become entangled with ML-driven artefacts. This is not only a symbolic or aesthetic exercise, but a potentially powerful way of investigating how standard thinking on human-ML relations relies on normative assumptions (e.g., anthropocentric, capitalist, hetero-normative) about the technological as much as the human side of those relations. In this light, and echoing recent calls for more-than-human design (cf., [13, 17, 46, 48, 55]), we propose that our concepts are readymade for research that takes seriously the role of non-human entities within design processes and products.
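The degree-of-variance idea discussed above can be sketched computationally (an illustrative setup of ours; the numbers stand in for no actual sea-level model): two hypothetical prediction ensembles with the same mean but different model uncertainty yield differently shaped ‘ranges of predictions’ for participants to probe.

```python
import numpy as np

def prediction_interval(samples, level=0.9):
    """Empirical central interval over an ensemble of predictions --
    a stand-in for the 'range of predictions' an artefact could mediate."""
    lo, hi = np.quantile(samples, [(1 - level) / 2, (1 + level) / 2])
    return float(lo), float(hi)

# Two hypothetical ensembles predicting sea-level rise in metres:
# identical mean (0.5 m), deliberately different model uncertainty.
rng = np.random.default_rng(7)
tight = rng.normal(loc=0.5, scale=0.05, size=1000)
loose = rng.normal(loc=0.5, scale=0.30, size=1000)

lo_t, hi_t = prediction_interval(tight)
lo_l, hi_l = prediction_interval(loose)
print(f"tight model, 90% interval: [{lo_t:.2f} m, {hi_t:.2f} m]")
print(f"loose model, 90% interval: [{lo_l:.2f} m, {hi_l:.2f} m]")
```

A design fiction could then treat the interval width itself as the designed parameter, asking what relations to the future a deliberately ‘loose’ artefact brings about.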
Post-phenomenology’s strengths lie in its “methodological post-humanism” [64], in interrogating how technological mediation affects how humans perceive and act in the world. However, ML’s thingly uncertainty (i.e., agential capacities for perception, prediction and adaptation) and technical opacity seem to require further in-depth consideration. In our inquiry, the human-technology relation schemata grew ever more complicated and convoluted as we investigated the relationship of model-world and human-technology relations in our case studies; and the phenomenological difference between present artefacts and absent ML technologies was not entirely resolved. Specifically, we can see this in the use of the slash ( / ) to indicate both the background of experience as well as the workings of ML in the background of an experienced artefact.

CHI ’21, May 8–13, 2021, Yokohama, Japan. Benjamin, et al.
Whereas post-phenomenology is mostly focused on technological mediation in the here and now of the phenomenological horizon, the presented phenomena of ML uncertainty trouble this selective focus. As ML algorithms infer models from data representations of the world, they ‘populate’ the world that humans experience with ready-made yet ultimately uncertain entities (e.g., music recommendations, likely traffic jams, people to follow, coasts to disappear). And as everyday phenomenological experience becomes textured by probabilistic models, our capacities for perceiving and acting in such ‘probable’ worlds are shaped accordingly. Thus, while we may not be aware of ML models’ involvement in technological mediation, our ways of relating to the world nonetheless become “imbricated” [23] (i.e., overlapping and co-extensive) with ML technologies. Accordingly, ‘our’ phenomenological horizon, the matters and modes of perceiving, acting and sense-making, is textured by ML’s thingly uncertainty. Investigating such horizonal relations can become a promising trajectory for post-phenomenological ML studies, which we briefly sketch as follows. Our preliminary schema mirrors Goodfellow et al.’s use of the tilde operator ( ~ ) for ML inference [18]:

I ∼→ ( Technology – [World] ) ML ~ World

Horizonal Relations
Similar to background relations, horizonal relations recede from a specific interface or device that intentional human-technology relations are formed with (e.g., in a hermeneutic relation). And similar to composite relations, horizonal relations feature technologically-exclusive interpretations of the world. But more than that, in horizonal relations, human-technology relations are embedded within ML technologies’ specific capacity of being uncertain about the world. Here, model-world relations infer a model ( ~ ) from the world-as-data ( [] ), which ‘converges’ with a particular human-technology relation. For example, the Facebook newsfeed algorithm is not only an operation on graph data, but implies a specific human way of relating to particular arrangements of data patterns. The hermeneutic relation that I take up with the Facebook newsfeed interface is therefore ‘textured’ by the uncertainties in predicting that data pattern. Horizonal relations may therefore be characterized as a human-technology vector associated with a given ML model, pointing towards a specifically ‘thematized-by-data’ world. The ways in which model and data uncertainties of such relational ‘pointing’ manifest (e.g., as pattern leakage and/or futures creep) in what we perceive are then the fundamental concerns of studying horizonal relations. Further investigations involving diverse ML implementations can use the preliminary schema and concepts we have developed to investigate the established human-technology relations (e.g., Table 1), refining our schema and deriving associated phenomena.

With this paper, we aimed to make ML more tangible for design research through a post-phenomenological perspective on ML uncertainty. Our analyses have generated concepts to be used by researchers, yet there are important areas of research that we have not touched upon.
A challenge to consider is how designing with ML uncertainty may become an ethico-politically reflective practice beyond a possibly detached aestheticization of ‘glitchy’ ML technologies. Verbeek stresses that post-phenomenology is particularly suited to the anticipation of ethical issues. Future work deploying our developed concepts should therefore be especially attentive to how ML uncertainty may play a role in the “hybrid moral agency” [66] constituted in the relationships of technology and people.
CONCLUSION

In this paper, we took a post-phenomenological lens to investigate design research projects for phenomena related to ML uncertainty. From our analyses, we generated three main provocative concepts. Thingly uncertainty denotes a general characteristic of ML-driven artefacts: the capacity for relating to the world along a variable continuum. Pattern leakage describes the propensity for the learned patterns of ML models to be projected into the world. Futures creep names the mediation of particular relations to the present and future of ML-driven artefacts. All concepts offer distinct opportunities for design research to engage ML-driven technological mediation. We argue that these concepts offer a promising foothold for design research of ML technologies, which has been a difficulty for the field. Additionally, we noted that the concepts derived from our case studies can also feed back into post-phenomenological ML studies, adding a more precise description of how human intentionality co-extends and overlaps with ML capacities in the form of horizonal relations. As such, we offer a modest step forward for design research and post-phenomenology to engage with ML.
ACKNOWLEDGMENTS
We thank Peter-Paul Verbeek, Michael Nagenborg and Christoph Kinkeldey for input and inspiration; Richmond Wong and Doenja Oogjes for clarification regarding their work; and our reviewers for their invaluable feedback. The authors of our case studies retain copyright for image material used as stated.
REFERENCES
[1] Madeleine Akrich. 1992. The De-Scription of Technical Objects. In Shaping Technology/Building Society: Studies in Sociotechnical Change, Wiebe E. Bijker and John Law (Eds.).
[2] Saleema Amershi, Dan Weld, Mihaela Vorvoreanu, Adam Fourney, Besmira Nushi, Penny Collisson, Jina Suh, Shamsi Iqbal, Paul N. Bennett, Kori Inkpen, Jaime Teevan, Ruth Kikin-Gil, and Eric Horvitz. 2019. Guidelines for Human-AI Interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). ACM, New York, NY, USA, 3:1–3:13. https://doi.org/10.1145/3290605.3300233
[3] Josh Andres, Christine T. Wolf, Sergio Cabrero Barros, Erick Oduor, Rahul Nair, Alexander Kjærum, Anders Bech Tharsgaard, and Bo Schwartz Madsen. 2020. Scenario-based XAI for Humanitarian Aid Forecasting. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems (CHI EA ’20). Association for Computing Machinery, New York, NY, USA, 1–8. https://doi.org/10.1145/3334480.3382903
[4] Arne Berger, William Odom, Michael Storz, Andreas Bischof, Albrecht Kurze, and Eva Hornecker. 2019. The Inflatable Cat: Idiosyncratic Ideation of Smart Objects for the Home. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). Association for Computing Machinery, Glasgow, Scotland, UK, 1–12. https://doi.org/10.1145/3290605.3300631
[5] Heidi R. Biggs and Audrey Desjardins. 2020. High Water Pants: Designing Embodied Environmental Speculation. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI ’20). Association for Computing Machinery, Honolulu, HI, USA, 1–13. https://doi.org/10.1145/3313831.3376429
[6] Christopher M. Bishop. 2006. Pattern Recognition and Machine Learning. Springer, New York.
[7] Christopher J. C. Burges. 1998. A Tutorial on Support Vector Machines for Pattern Recognition. Data Mining and Knowledge Discovery 2, 2 (1998), 121–167.
[8] Jenna Burrell. 2016. How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society 3, 1 (Jan. 2016). https://doi.org/10.1177/2053951715622512
[9] T. Tony Cai and Peter Hall. 2006. Prediction in functional linear regression. The Annals of Statistics 34, 5 (2006), 2159–2179.
[10] Graham Dove, Kim Halskov, Jodi Forlizzi, and John Zimmerman. 2017. UX Design Innovation: Challenges for Working with Machine Learning as a Design Material. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 278–288. https://doi.org/10.1145/3025453.3025739
[11] Elena Esposito. 2013. The structures of uncertainty: performativity and unpredictability in economic operations. Economy and Society 42, 1 (Feb. 2013), 102–129. https://doi.org/10.1080/03085147.2012.687908
[12] Craig R. Fox and Gülden Ülkümen. 2011. Distinguishing two dimensions of uncertainty. In Perspectives on Thinking, Judging, and Decision Making. 21–35.
[13] Christopher Frauenberger. 2019. Entanglement HCI The Next Wave? ACM Transactions on Computer-Human Interaction (TOCHI) (Nov. 2019). https://doi.org/10.1145/3364998
[14] Bill Gaver and John Bowers. 2012. Annotated portfolios. Interactions 19, 4 (July 2012), 40–49. https://doi.org/10.1145/2212877.2212889
[15] William W. Gaver, Andrew Boucher, Sarah Pennington, and Brendan Walker. 2004. Cultural Probes and the Value of Uncertainty. Interactions 11, 5 (Sept. 2004), 53–56. https://doi.org/10.1145/1015530.1015555
[16] William W. Gaver, John Bowers, Andrew Boucher, Hans Gellerson, Sarah Pennington, Albrecht Schmidt, Anthony Steed, Nicholas Villars, and Brendan Walker. 2004. The Drift Table: Designing for Ludic Engagement. In CHI ’04 Extended Abstracts on Human Factors in Computing Systems (CHI EA ’04). ACM, New York, NY, USA, 885–900. https://doi.org/10.1145/985921.985947
[17] Elisa Giaccardi, Nazli Cila, Chris Speed, and Melissa Caldwell. 2016. Thing Ethnography: Doing Design Research with Non-Humans. In Proceedings of the 2016 ACM Conference on Designing Interactive Systems (DIS ’16). Association for Computing Machinery, Brisbane, QLD, Australia, 377–387. https://doi.org/10.1145/2901790.2901905
[18] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 2016. Deep Learning. MIT Press, Cambridge, MA.
[20] Ian Hacking. 1990. The Taming of Chance (1st ed.). Cambridge University Press, Cambridge, England; New York.
[21] Alon Halevy, Peter Norvig, and Fernando Pereira. 2009. The Unreasonable Effectiveness of Data.
IEEE Intelligent Systems 24, 2 (March 2009), 8–12. https://doi.org/10.1109/MIS.2009.36
[22] Leif Hancox-Li. 2020. Robustness in machine learning explanations: does it matter? In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* ’20). Association for Computing Machinery, Barcelona, Spain, 640–647. https://doi.org/10.1145/3351095.3372836
[23] Mark B. N. Hansen. 2015. Feed-Forward: On the Future of Twenty-First-Century Media. University of Chicago Press, Chicago; London.
[24] Steve Harrison, Phoebe Sengers, and Deborah Tatar. 2011. Making epistemological trouble: Third-paradigm HCI as successor science. Interacting with Computers 23, 5 (Sept. 2011), 385–392. https://doi.org/10.1016/j.intcom.2011.03.005
[25] Sabrina Hauser, Doenja Oogjes, Ron Wakkary, and Peter-Paul Verbeek. 2018. An Annotated Portfolio on Doing Postphenomenology Through Research Products. In Proceedings of the 2018 Designing Interactive Systems Conference (DIS ’18). ACM, New York, NY, USA, 459–471. https://doi.org/10.1145/3196709.3196745
[26] Sabrina Hauser, Ron Wakkary, William Odom, Peter-Paul Verbeek, Audrey Desjardins, Henry Lin, Matthew Dalton, Markus Schilling, and Gijs de Boer. 2018. Deployments of the Table-non-table: A Reflection on the Relation Between Theory and Things in the Practice of Design Research. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18). ACM, New York, NY, USA, 201:1–201:13. https://doi.org/10.1145/3173574.3173775
[27] Patrick Hebron. 2016. Machine Learning for Designers. O’Reilly Media.
[28] Martin Heidegger. 2010. Being and Time. State University of New York Press, Albany.
[29] Drew Hemment, Ruth Aylett, Vaishak Belle, Dave Murray-Rust, Ewa Luger, Jane Hillston, Michael Rovatsos, and Frank Broz. 2019. Experiential AI. AI Matters (2019).
[30] In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). ACM, New York, NY, USA, 579:1–579:13. https://doi.org/10.1145/3290605.3300809
[31] Stacy Hsueh, Sarah Fdili Alaoui, and Wendy E. Mackay. 2019. Understanding Kinaesthetic Creativity in Dance. In
Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). ACM, New York, NY, USA, 511:1–511:12. https://doi.org/10.1145/3290605.3300741
[32] Kristina Höök and Jonas Löwgren. 2012. Strong concepts: Intermediate-level knowledge in interaction design research. ACM Transactions on Computer-Human Interaction 19, 3 (Oct. 2012), 23:1–23:18. https://doi.org/10.1145/2362364.2362371
[33] Don Ihde. 1990. Technology and the Lifeworld: From Garden to Earth. Indiana University Press, Bloomington and Indianapolis.
[34] Don Ihde. 1997. Thingly hermeneutics/Technoconstructions. In Hermeneutics and the Natural Sciences, Robert P. Crease (Ed.). Springer Netherlands, Dordrecht, 111–123. https://doi.org/10.1007/978-94-009-0049-3_7
[35] Alex Kendall and Yarin Gal. 2017. What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision? arXiv:1703.04977 [cs] (Oct. 2017). http://arxiv.org/abs/1703.04977
[36] Christoph Kinkeldey, Tim Korjakow, and Jesse Josua Benjamin. 2019. Towards Supporting Interpretability of Clustering Results with Uncertainty Visualization. The Eurographics Association. https://doi.org/10.2312/trvis.20191183
[37] Armen Der Kiureghian and Ove Ditlevsen. 2009. Aleatory or epistemic? Does it matter? Structural Safety 31, 2 (March 2009), 105–112. https://doi.org/10.1016/j.strusafe.2008.06.020
[38] Sandjar Kozubaev, Chris Elsden, Noura Howell, Marie Louise Juul Søndergaard, Nick Merrill, Britta Schulte, and Richmond Y. Wong. 2020. Expanding Modes of Reflection in Design Futuring. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–15. https://doi.org/10.1145/3313831.3376526
[39] Yongchan Kwon, Joong-Ho Won, Beom Joon Kim, and Myunghee Cho Paik. 2018. Uncertainty quantification using Bayesian neural networks in classification: Application to ischemic stroke lesion segmentation. Amsterdam, The Netherlands, 13.
[40] Gierad Laput, Yang Zhang, and Chris Harrison. 2017. Synthetic Sensors: Towards General-Purpose Sensing. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). Association for Computing Machinery, Denver, Colorado, USA, 3986–3999. https://doi.org/10.1145/3025453.3025773
[41] Lucian Leahu. 2016. Ontological Surprises: A Relational Perspective on Machine Learning. In Proceedings of the 2016 ACM Conference on Designing Interactive Systems (DIS ’16). ACM, New York, NY, USA, 182–186. https://doi.org/10.1145/2901790.2901840
[42] Jae Won Lee. 2001. Stock price prediction using reinforcement learning. In ISIE 2001. 2001 IEEE International Symposium on Industrial Electronics Proceedings (Cat. No.01TH8570), Vol. 1. 690–695. https://doi.org/10.1109/ISIE.2001.931880
[43] Q. Vera Liao, Daniel Gruen, and Sarah Miller. 2020. Questioning the AI: Informing Design Practices for Explainable AI User Experiences. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI ’20). Association for Computing Machinery, Honolulu, HI, USA, 1–15. https://doi.org/10.1145/3313831.3376590
[44] Youn-kyung Lim, Sang-Su Lee, and Da-jung Kim. 2011. Interactivity attributes for expression-oriented interaction design. International Journal of Design 5, 3 (2011).
[45] Zachary C. Lipton. 2016. The Mythos of Model Interpretability. arXiv:1606.03490 [cs, stat] (June 2016). http://arxiv.org/abs/1606.03490
[46] Susan Loh, Marcus Foth, Glenda Amayo Caldwell, Veronica Garcia-Hansen, and Mark Thomson. 2020. A more-than-human perspective on understanding the performance of the built environment. Architectural Science Review 63, 3-4 (July 2020), 372–383. https://doi.org/10.1080/00038628.2019.1708258
[47] Davor Löffler. 2018. Distributing Potentiality. Post-capitalist Economies and the Generative Time Regime. Identities: Journal for Politics, Gender and Culture.
[48] Betti Marenko and Philip van Allen. 2016. Animistic design: how to reimagine digital interaction between the human and the nonhuman. Digital Creativity 27, 1 (2016), 52–70.
[49] Warren S. McCulloch and Walter Pitts. 1943. A logical calculus of the ideas immanent in nervous activity. The Bulletin of Mathematical Biophysics 5, 4 (Dec. 1943), 115–133. https://doi.org/10.1007/BF02478259
[50] Dan McQuillan. 2018. Data Science as Machinic Neoplatonism. Philosophy & Technology 31, 2 (June 2018), 253–272. https://doi.org/10.1007/s13347-017-0273-3
[51] Nick Merrill and John Chuang. 2018. From Scanning Brains to Reading Minds: Talking to Engineers about Brain-Computer Interface. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18). ACM Press, Montreal, QC, Canada, 1–11. https://doi.org/10.1145/3173574.3173897
[52] Nick Merrill, John Chuang, and Coye Cheshire. 2019. Sensing is Believing: What People Think Biosensors Can Reveal About Thoughts and Feelings. In Proceedings of the 2019 Designing Interactive Systems Conference (DIS ’19). ACM Press, San Diego, CA, USA, 413–420. https://doi.org/10.1145/3322276.3322286
[53] Tim Miller. 2017. Explanation in Artificial Intelligence: Insights from the Social Sciences. arXiv:1706.07269 [cs] (June 2017). http://arxiv.org/abs/1706.07269
[54] Masaki Nakada, Han Wang, and Demetri Terzopoulos. 2017. AcFR: Active Face Recognition Using Convolutional Neural Networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). 35–40. https://doi.org/10.1109/CVPRW.2017.11
[55] Iohanna Nicenboim, Elisa Giaccardi, Marie Louise Juul Søndergaard, Anuradha Venugopal Reddy, Yolande Strengers, James Pierce, and Johan Redström. 2020. More-Than-Human Design and AI: In Conversation with Agents. In
Companion Publication of the 2020 ACM Designing Interactive Systems Conference (DIS ’20 Companion). Association for Computing Machinery, New York, NY, USA, 397–400. https://doi.org/10.1145/3393914.3395912
[56] Doenja Oogjes, William Odom, and Pete Fung. 2018. Designing for an other Home: Expanding and Speculating on Different Forms of Domestic Life. In Proceedings of the 2018 Designing Interactive Systems Conference (DIS ’18). Association for Computing Machinery, Hong Kong, China, 313–326. https://doi.org/10.1145/3196709.3196810
[57] Doenja Oogjes, Ron Wakkary, Henry Lin, and Omid Alemi. 2020. Fragile! Handle with Care: The Morse Things. In Proceedings of the 2020 ACM Designing Interactive Systems Conference (DIS ’20). Association for Computing Machinery, Eindhoven, Netherlands, 2149–2162. https://doi.org/10.1145/3357236.3395584
[58] James Pierce. 2019. Smart Home Security Cameras and Shifting Lines of Creepiness: A Design-Led Inquiry. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). ACM, New York, NY, USA, 45:1–45:14. https://doi.org/10.1145/3290605.3300275
[59] A. Rakotomamonjy, V. Guigue, G. Mallet, and V. Alvarado. 2005. Ensemble of SVMs for Improving Brain Computer Interface P300 Speller Performances. In Artificial Neural Networks: Biological Inspirations – ICANN 2005 (Lecture Notes in Computer Science), Włodzisław Duch, Janusz Kacprzyk, Erkki Oja, and Sławomir Zadrożny (Eds.). Springer, Berlin, Heidelberg, 45–50. https://doi.org/10.1007/11550822_8
[60] Johan Redström. 2005. On Technology as Material in Design. Design Philosophy Papers 3, 2 (June 2005), 39–54. https://doi.org/10.2752/144871305X13966254124275
[61] Johan Redström and Heather Wiltse. 2015. Press Play: Acts of defining (in) fluid assemblages. Nordes (2015).
[62] Robert Rosenberger and Peter-Paul Verbeek (Eds.). 2015. Postphenomenological Investigations: Essays on Human-Technology Relations. Lexington Books, Lanham, MD.
[63] Frank Rosenblatt. 1958. The perceptron: a probabilistic model for information storage and organization in the brain. Psychological Review 65, 6 (1958), 386–408.
[64] Tamar Sharon. 2013. Human Nature in an Age of Biotechnology: The Case for Mediated Posthumanism. Springer, New York, NY.
[65] William V. Sweet, Greg Dusek, Jayantha Obeysekera, and John J. Marra. 2018. Patterns and Projections of High Tide Flooding Along the U.S. Coastline Using a Common Impact Threshold. Technical Report 086. National Oceanic and Atmospheric Administration, Maryland, USA.
[66] Peter-Paul Verbeek. 2006. Materializing Morality: Design Ethics and Technological Mediation. Science, Technology, & Human Values 31, 3 (May 2006), 361–380. https://doi.org/10.1177/0162243905285847
[67] Peter-Paul Verbeek. 2006. What Things Do: Philosophical Reflections on Technology, Agency, and Design. Penn State Press, University Park.
[68] Peter-Paul Verbeek. 2008. Cyborg intentionality: Rethinking the phenomenology of human–technology relations. Phenomenology and the Cognitive Sciences 7, 3 (Sept. 2008), 387–395. https://doi.org/10.1007/s11097-008-9099-x
[69] Peter-Paul Verbeek. 2015. Beyond Interaction: A Short Introduction to Mediation Theory. Interactions 22, 3 (April 2015), 26–31. https://doi.org/10.1145/2751314
[70] Peter-Paul Verbeek. 2015. Designing the Public Sphere: Information Technologies and the Politics of Mediation. In The Onlife Manifesto: Being Human in a Hyperconnected Era, Luciano Floridi (Ed.). Springer International Publishing, Cham, 217–227. https://doi.org/10.1007/978-3-319-04093-6_21
[71] Ron Wakkary, William Odom, Sabrina Hauser, Garnet Hertz, and Henry Lin. 2015. Material Speculation: Actual Artifacts for Critical Inquiry. Aarhus Series on Human Centered Computing 1, 1 (Oct. 2015), 12. https://doi.org/10.7146/aahcc.v1i1.21299
[72] Ron Wakkary, Doenja Oogjes, Sabrina Hauser, Henry Lin, Cheng Cao, Leo Ma, and Tijs Duel. 2017. Morse Things: A Design Inquiry into the Gap Between Things and Us. In Proceedings of the 2017 Conference on Designing Interactive Systems (DIS ’17). ACM, New York, NY, USA, 503–514. https://doi.org/10.1145/3064663.3064734
[73] Danding Wang, Qian Yang, Ashraf Abdul, and Brian Y. Lim. 2019. Designing Theory-Driven User-Centric Explainable AI. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). ACM, New York, NY, USA, 601:1–601:15. https://doi.org/10.1145/3290605.3300831
[74] Norbert Wiener. 1961. Cybernetics: or Control and Communication in the Animal and the Machine (2nd ed.). MIT Press, Cambridge, MA.
[75] Ronald J. Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning 8 (1992), 229–256.
[76] In Proceedings of the 24th International Conference on Intelligent User Interfaces (IUI ’19). Association for Computing Machinery, New York, NY, USA, 252–257. https://doi.org/10.1145/3301275.3302317
[77] Jason Shun Wong. 2018. Design and fiction: imagining civic AI. Interactions 25, 6 (Oct. 2018), 42–45. https://doi.org/10.1145/3274568
[78] Richmond Y. Wong, Vera Khovanskaya, Sarah E. Fox, Nick Merrill, and Phoebe Sengers. 2020. Infrastructural Speculations: Tactics for Designing and Interrogating Lifeworlds. (April 2020). https://doi.org/10.1145/3313831.3376515
[79] Richmond Y. Wong, Nick Merrill, and John Chuang. 2018. When BCIs Have APIs: Design Fictions of Everyday Brain-Computer Interface Adoption. In Proceedings of the 2018 Designing Interactive Systems Conference (DIS ’18). ACM, New York, NY, USA, 1359–1371. https://doi.org/10.1145/3196709.3196746
[80] Qian Yang, Nikola Banovic, and John Zimmerman. 2018. Mapping Machine Learning Advances from HCI Research to Reveal Starting Places for Design Innovation. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18). ACM, New York, NY, USA, 130:1–130:11. https://doi.org/10.1145/3173574.3173704
[81] Qian Yang, Alex Scuito, John Zimmerman, Jodi Forlizzi, and Aaron Steinfeld. 2018. Investigating How Experienced UX Designers Effectively Work with Machine Learning. In Proceedings of the 2018 Designing Interactive Systems Conference (DIS ’18). Association for Computing Machinery, New York, NY, USA, 585–596. https://doi.org/10.1145/3196709.3196730
[82] Qian Yang, Jina Suh, Nan-Chen Chen, and Gonzalo Ramos. 2018. Grounding Interactive Machine Learning Tool Design in How Non-Experts Actually Build Models. In