Creepy Technology: What Is It and How Do You Measure It?
Paweł W. Woźniak, Jakob Karolus, Florian Lang, Caroline Eckherth, Johannes Schöning, Yvonne Rogers, Jasmin Niess
Paweł W. Woźniak, Utrecht University, Utrecht, the Netherlands, [email protected]
Jakob Karolus, LMU Munich, Munich, Germany, [email protected]
Florian Lang, LMU Munich, Munich, Germany, [email protected]
Caroline Eckherth, LMU Munich, Munich, Germany, [email protected]
Johannes Schöning, University of Bremen, Bremen, Germany, [email protected]
Yvonne Rogers, University College London, London, United Kingdom, [email protected]
Jasmin Niess, University of Bremen, Bremen, Germany, [email protected]
ABSTRACT
Interactive technologies are getting closer to our bodies and permeate the infrastructure of our homes. While such technologies offer many benefits, they can also cause an initial feeling of unease in users. It is important for Human-Computer Interaction to manage first impressions and avoid designing technologies that appear creepy. To that end, we developed the Perceived Creepiness of Technology Scale (PCTS), which measures how creepy a technology appears to a user in an initial encounter with a new artefact. The scale was developed based on past work on creepiness and a set of ten focus groups conducted with users from diverse backgrounds. We followed a structured process of analytically developing and validating the scale. The PCTS is designed to enable designers and researchers to quickly compare interactive technologies and ensure that they do not design technologies that produce initial feelings of creepiness in users.
CCS CONCEPTS
• Human-centered computing → HCI design and evaluation methods.

KEYWORDS
creepiness; creepy; first impression; evaluation; scale; questionnaire; perceived creepiness of technology scale
ACM Reference Format:
Paweł W. Woźniak, Jakob Karolus, Florian Lang, Caroline Eckherth, Johannes Schöning, Yvonne Rogers, and Jasmin Niess. 2021. Creepy Technology: What Is It and How Do You Measure It?. In CHI Conference on Human Factors in Computing Systems (CHI '21), May 8–13, 2021, Yokohama, Japan. ACM, New York, NY, USA, 13 pages. https://doi.org/10.1145/3411764.3445299

CHI '21, May 8–13, 2021, Yokohama, Japan
© 2021 Copyright held by the owner/author(s). Publication rights licensed to ACM. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in CHI Conference on Human Factors in Computing Systems (CHI '21), May 8–13, 2021, Yokohama, Japan, https://doi.org/10.1145/3411764.3445299.
INTRODUCTION

When creating novel interactive technologies, be it for research or practical purposes, managing first impressions is key [22, 53, 63]. A technology that looks intimidating, scary or unpleasant is unlikely to engage the user's willingness to interact with it. This challenge becomes even more salient when dealing with technologies that reflect recent trends in Human-Computer Interaction (HCI), such as wearable computing [32] or sensory amplification [67]. While such trends promise attractive technological futures, they also envision many technologies that could initially be perceived negatively. As a consequence, designers of future technologies need to understand how to build technologies that offer positive first impressions. This gives rise to the need for methods that would enable designers to compare alternative prototypes in terms of first impressions.

The HCI field has a history of studying technologies that users perceive as potentially creepy. Privacy research used the term creepy to describe technologies that were perceived as potentially encroaching on the users' privacy, e.g. [61]. Research in avatars and robots equated creepy with uncanny [24] to denote humanoid representations that produce a certain unease in users. However, the use of the term was not exclusive to these two domains. Past research has reported that self-driving cars [49], traffic lights [57], headphones [18], voice assistants [69] or toilets [40] could also be creepy.

More and more people are beginning to encounter creepy technologies in their everyday lives. Newspapers report about technologies from dating apps [7] to speakers [13] being perceived as creepy. A recent concern is facial recognition technology, which is beginning to be deployed in physical and online stores to help retailers improve how they customise the shopping experience. Smart AI platforms are emerging that can detect at a glance our gender, race, approximate age, where and how long we have been looking at something, and what emotional state we are in [23]. Further, creepy technologies are not only developed by corporations but also by users themselves. Modern development tools enable proficient users to develop personal apps, such as a chatbot for talking to a late friend based on past conversations [42]. In this case, even the creator of the chatbot was concerned that the application could be creepy. As users are increasingly likely to experience creepiness in everyday interactions, HCI needs to understand more about the phenomenon to minimise the negative impact of future interactive technologies.

To this end, this paper explores the qualities in technology that give users the heebie-jeebies. We propose a structured framing of the creepiness of interactive technologies and an accompanying measurement instrument, the Perceived Creepiness of Technology Scale (PCTS). We followed a structured scale development process where we first formed a conceptual understanding of creepiness, followed by empirically building the scale using the guidelines collected by Boateng et al. [4]. We first investigated past work in HCI and identified papers which reported technologies being creepy outside of the humanoid or robotics fields. To empirically explore how users think about creepiness, we then conducted a set of ten focus groups where users expressed their opinions about potentially creepy technologies. Based on the literature and our analysis of the focus group content, we then proposed a general framing of the concept for HCI. The elements of the model served as an inspiration for generating initial items for the scale, which were then subjected to an expert review. We used exploratory factor analysis to reduce the number of items and obtain the final scale, which was then validated in a number of evaluation assessments. For an overview of our scale development process, see Figure 1. Our work offers the first, to our knowledge, conceptualisation of creepiness in HCI and a validated scale for assessing the creepiness of interactive artefacts.
RELATED WORK

To frame our inquiry, we first chart the use of the concept of creepiness and the adjective creepy in past research. We report on how the terms were used in privacy research and Human-Robot Interaction. We then investigate the relationship between creepiness and acceptability. Finally, we report on how creepiness was ascribed to non-humanoid digital artefacts in past HCI research.
Studies in the privacy of personal technologies have extensively used the term creepy to refer to technologies that are perceived as potentially threatening privacy. Creepiness was particularly ascribed to technologies that collect personal information about their users. Pierce [61] researched speculative scenarios for home cameras and concluded that minute details in the design of home devices led to different levels of creepiness. This work calls for unpacking the reasons behind creepiness and shows that how creepy a device is perceived to be is influenced by diverse factors. Zhang et al. [82] studied how targeted advertisements evoked feelings of creepiness in users. The study focused on the consequences of this for the use of social media and not on the sources of creepiness per se. Also in the context of advertising, Ur et al. [77] reported that creepiness was associated with the feeling of being followed. Importantly for our understanding of creepiness, Shklovski et al. [70], studying mobile app use, found that creepiness was not connected to an anticipated negative end result of using a technology. In this context, both Shklovski et al. and Phelan et al. [60] equated creepy with disturbing. The latter paper underlined the intuitiveness of the concept, with creepiness (intuitive concern) being less rational than considered concern. Other research also reported creepiness in the context of privacy violation when users were involved in unsolicited meetings on social media [2] and crowdworking [21]. The examples listed here are just a few, illustrating the breadth of the use of the term creepy in privacy research. While this body of research addresses a broad scope of applications, it shares a common understanding of creepiness as an, often unspoken and innate, anticipation of the technology violating ethical principles held by the user. Our work is inspired by accounts of creepiness in privacy research. We aim, however, to broaden the scope of understanding creepiness beyond privacy concerns.
In research on virtual avatars and Human-Robot Interaction (HRI), the notion of creepiness is primarily associated with 'uncanniness' and the uncanny valley phenomenon, i.e. unsettling feelings elicited in someone by an artefact's spooky resemblance to a human being or other animate beings. Schwind et al. [68] reported how some representations of the users' hands evoked feelings of creepiness. Early HRI research showed that creepiness was prevalent when robots offered emergency help [51]. Lin et al. [39] reported that parents wanted to explicitly limit the creepiness of robots when they allowed them to interact with their children. The representation of faces for both robots [26] and virtual avatars [44] elicited feelings of creepiness related to the mismatch between their appearance and the user's expectations. Löffler et al. [41] proposed a different interpretation of creepiness. They used a scale where creepy was the opposite of friendly to assess the perception of animal-like robots. Creepiness has also been viewed as a pragmatic concern, lowering the effectiveness of interaction when robotic assistants helped with analytical tasks [71]. HRI work has contributed a number of understandings of creepiness and identified technologies being potentially creepy as a key concern when building new interactive artefacts. In this paper, we extend the notion of creepiness based on HRI experiences and broaden the scope of potentially creepy technologies beyond robots and human-like artefacts.
Previous research in HCI has often associated the term creepy with social unacceptability. For example, the WEAR scale [32] explicitly used the adjective creepy as a contribution to the acceptability scale. Consequently, it might appear that creepiness is a subordinate concept to acceptability. This, however, is in conflict with the past work discussed above, which reported users willingly using technologies despite their creepiness, e.g. [60]. In fact, in Koelle et al.'s [33] review of social acceptability research, the WEAR scale is the only mention of creepiness. Thus, related work suggests that creepiness is a distinct concept from acceptability. Here, we investigate creepiness as one's personal perception of an artefact, which is different from acceptability, understood as the lack of negative reactions from others [32, 33]. Thus, while acceptability has an inherently social dimension [75], we found no previous work that would suggest that creepiness is necessarily social.

Figure 1: The scale development process which we followed in this paper. The workflow is a selection of development and evaluation methods suggested by Boateng et al. [4].

We identified only one paper in HCI which investigated the notion of creepiness in an explicit manner. Yip et al. [81] studied perceptions of creepiness in children interacting with technology. They found that the key factors contributing to creepiness were: 'deception, lack of control, mimicry, ominous physical appearance, and unpredictability'. They based their inquiry on the socionormative formulation of creepiness by Tene and Polonetsky [73]. The results, however, show that creepiness with regard to technology goes beyond social norms. The different aspects of creepiness identified by Yip et al. serve as a starting point for our inquiry. The goal of our research is to develop a more structured understanding of these dimensions and a measurement instrument that facilitates comparison in terms of creepiness.
Having established the two domains where the term creepy was present, we decided to investigate what other technologies studied in HCI, i.e. outside of robotics and privacy research, were reported to be creepy. To this end, we conducted a literature review in the ACM Digital Library. We used the query 'creepy NOT privacy NOT robot', which resulted in 178 papers in SIGCHI-sponsored conferences and an additional 9 papers in the ToCHI journal. (Note that the new ACM DL includes derivative forms of the word; thus, e.g., creep, creeping and creepiness were included as keywords.) We then reviewed all the papers and decided to exclude publications which: (1) used the verb to creep in a figurative sense or to denote movement, (2) referred to creep as a term in materials science, (3) discussed feature creep, a phenomenon in software development, and (4) used the term creepy as part of a citation from prior work. This filtering process yielded 31 papers, which we then open-coded to identify key domains where research reported creepy technologies. While the full results of the review are beyond the scope of this paper, we report here the main areas which we identified, with selected examples.

Unsurprisingly, the largest group consisted of papers where the design intention was to make the user feel a certain unease. Exploration through provocative art pieces [27, 36, 55] or unconventional artefacts [1, 57, 83] was a prevalent theme in the reviewed corpus. The reported research illustrates how creepiness is an aspect of interactive technologies which designers explicitly consider, thus showing a need for a deeper understanding of the concept. We note that all the artefacts in this group featured differing levels of ambiguity [15]. This suggests that creepiness is connected to not precisely knowing the nature of the artefact. In a similar way, unconventional audio interactions [31, 65] led to not knowing what to expect from a technology and thus experiencing creepiness. For many of these interactive technologies, creepiness may not necessarily be a negative property.

Creepiness when using mediated touch [12, 17, 25, 37, 56] also featured highly in the literature. These papers are less relevant to the current paper as the users' perception of unease when using mediated touch was previously defined as disfordance by Mejia and Yarosh [46] and can be measured with a validated scale. In contrast, our aim is to capture the concept of creepiness for a larger class of artefacts, while building on the lessons learnt from previous research.

Earlier research has noted how interacting with technologies that could be assigned agency was also a source of creepiness. Studies describing interactions with voice assistants [59, 69] and with autonomous cars [14, 49, 50] found that both were perceived as creepy. These examples show that creepiness can be experienced where artefacts take on an assumed social presence and possibly violate norms related to this presence. An example of this kind of creepiness is how someone feels when crossing the road in front of a driverless car [50].

Some of the other papers reported that interactive technologies that have direct contact with our bodies can be perceived as creepy. In particular, creepiness has been reported for wearables [16, 18] and technologies that use physiological sensing [5, 28, 47, 48]. These works offer two ways to frame creepiness. First, we see the notion of a certain magical element, i.e. providing insight one should not have, as in Merrill et al.'s work [48], where EEG systems were perceived as mind readers. Second, creepiness is also related to a perception of possible harm [47].

Experiences of Augmented Reality [29, 52] were also potential sources of creepiness. Ni et al. [52] reported on an Augmented Reality system for facilitating communication with physicians. They equated a creepy feeling with emotional discomfort.

Additionally, we noted how the term creepy has often been used by children [28, 31, 81]. This is explained by child development research, where it has been found that standards of creepiness are formed early in life [6]. Furthermore, we even found one paper that reported on creepiness in interactions between the users of a makerspace [76] and one describing social media behaviour [38] (with no direct connection to privacy).
The variety of findings revealed by our literature review demonstrates the need for a shared conceptual understanding of creepiness within the HCI community. We aim to address this gap by developing a conceptual model of creepiness in HCI and a complementary measurement instrument.
The concept of creepiness has also been studied more broadly in the social sciences. For example, Watt and Gallagher [79] studied how human faces can be defined as creepy. While their study is not directly relevant to the creepiness of inanimate objects, their findings echoed some of the qualitative evidence in HCI where creepiness is linked to the violation of norms and the perceived possibility of harm. McAndrew and Koehnke [43] used an online survey to establish that unpredictability was a key factor in creepiness. This finding is relevant for our work as the potentially creepy technologies in HCI research also contained a certain je ne sais quoi element. Given the prevalence of the term and its apparent importance for the evaluation of certain classes of technologies, it would be beneficial for HCI to develop a structured understanding of creepiness and the means to evaluate whether interactive technologies are creepy.

The nearest operationalised concept that has been developed for understanding creepiness in an interactive technology context is Langer and König's [35] CRoSS scale. This was designed to rate the creepiness of situations, and some of the examined situations involved technology. However, Langer and König attributed creepiness purely to context. In contrast, our investigation focuses solely on creepiness as a property of an interactive technology. By doing so, our objective is to assess creepiness as part of a design process, as opposed to the context.
Our literature review revealed that creepiness has been explained in HCI as a multifaceted concept. Moreover, we found little agreement on what qualities of an artefact contribute to it being perceived as creepy. In order to broaden our understanding of creepiness, we conducted a series of ten focus groups in which participants communicated their first impressions of technologies that could be considered creepy. After the first two focus groups, we refined our focus group protocol. In the first two focus groups, we contrasted different interactive technologies to determine which stimuli were perceived as creepy. This informed our remaining eight focus groups, where we focused on one particular creepy technology.
Eight participants took part in the first two focus groups. For the second set of focus groups, we recruited two age cohorts: older adults (6 female, 6 male) and younger adults (5 female, 7 male). Participants had diverse occupations, such as teachers, designers, public servants, students and engineers, in different areas including IT, advertising, human resources, political science and psychology. Most participants had a bachelor's degree.

Table 1: Participant information for the second set of focus groups, categorised by age group.

Variables                            Younger adults (N, %)    Older adults (N, %)
Gender
  Women
  Men
Education
  No degree
  Completed apprenticeship
  Bachelor's degree or equivalent
  Master's degree or equivalent
  Not specified
Work sector
  (Producing) industry
  Service sector
  Public sector
  Education
  Social services
  Undergoing training
The focus groups were divided into an exploratory phase (25 min) and a follow-up group interview (20 min) moderated by one of the experimenters, and were accompanied by two short questionnaires querying demographics, technology adoption and experience with wearable devices.

In the first two focus groups, participants experienced a set of four technologies varying in aesthetics, comfort and perceived trust. The technologies comprised conventional devices, namely the consumer products Fitbit Flex 2 and Empatica E4, and two research prototypes: one showing real-time muscle activity (EMG) through attached electrodes and the other a step counter based on a pressure-sensitive shoe sole that was attached to the participants' shoes. Under the supervision of two experimenters, the participants took turns in trying out all four devices.

In the follow-up group discussion, the moderator inquired about the participants' first impressions of the presented devices. Further topics included their level of trust in the technologies, aspects of the interactive technologies that caught their interest and the perceived interaction with the technologies. Lastly, the group discussed potential usage scenarios.

Based on our findings in the first set of focus groups, the EMG-based technology triggered the strongest feelings of creepiness amongst the participants. Thus, we decided to select the EMG-based device as the stimulus for further inquiries. Three participants took part in each of the eight focus groups after they had been introduced to the EMG prototype. After the introduction and an exploration phase, which lasted 10 minutes, the focus group started. Again, the focus groups followed a semi-structured protocol and lasted 20 min. We inquired about general perception of and interaction with the device. Furthermore, we explored concerns, fears and desires the participants had when interacting with unknown technology. Following a ladder interview approach [19], we paid special attention to adjectives associated with creepiness, such as creepy, unpleasant, strange, threatening and frightening. In such cases, the moderators explored the topic further. The adjectives were adapted from related work [35, 43, 52, 79, 81] with the help of the Oxford Thesaurus of English [78].
All focus group recordings were transcribed verbatim. Then, four authors of the paper open-coded a sample of 20% of the material and conducted a discussion to establish an initial coding tree, in line with Blandford et al. [3]. We further used the factors contributing to creepiness from our literature review as sensitising concepts [66] in the analysis. The remaining data was distributed equally amongst the four coders. We then iteratively refined the coding of the data, which resulted in the construction of three core themes that described the facets of creepiness as reported by the focus group participants. These themes, together with the insights from our literature review, form the foundation of our conceptual model of creepiness.
Our model of creepiness consists of three dimensions which describe creepy experiences, derived from our observations of previous research and the accounts of creepiness gathered in our focus groups: implied malice, undesirability and unpredictability. The model is shown in Figure 2.
Implied Malice
This dimension describes perceived bad intentions communicated through the design of a creepy technology. It represents a generalisation of the understanding of creepy as violating privacy. In our model, 'implied malice' ('intention or desire to do evil or cause injury' [54]) describes the perceived potential of an interactive artefact to violate principles which are important to the user. This dimension is based primarily on reports of creepiness in privacy research and interactions with autonomous systems. Violating the user's value systems was also discussed in the focus groups:

P8: (...) something completely new. Yes, I'm so scared, okay, somehow it's too much (...). For example, when Alexa came on the market, I thought it was super creepy. I still think that it is scary and there are friends of mine and I sometimes think that someone is listening in, for example. (...) I wouldn't buy that myself.
Undesirability
Undesirability in our model refers to users perceiving the interactive technology as a non sequitur: a feeling of unease caused by the interactive artefact being out of context. The term 'undesirability' highlights the feeling of unease inherent to the artefact, which can be due to a variety of factors such as social context or aesthetic appearance. This dimension is based on McAndrew and Koehnke's [43] research, adapted to the creepiness of inanimate objects. Focus group participants reflected on negative social consequences of using the technologies with which they were interacting:

P1: But I find it a bit creepy. Imagine you see a person with it (...).
P2: I would mainly be worried.

Undesirability can also imply that the design aesthetic of the artefact does not match the environment in which it is presented to the user. This dimension of creepiness is evident in the provocative technologies discussed above. It was mentioned by one focus group participant who discussed how the aesthetic of the technology did not match its context of use:

P7: Yes. So there must be some serious reason why I have something like that.
P9: So when you use something like that; you mentioned suitability for everyday use, [the way this looks] you can't just walk around with it.
Unpredictability
In our model, we use the term 'unpredictability' to denote the negative feelings connected to users not being able to anticipate the interactive technology's actions and/or exercise a desired level of control. 'Unpredictability' thus refers to the inability of the user to immediately operate and understand the device. Control was a key dimension in Yip et al.'s [81] work. Other works demonstrated that a perceived lack of control may lead to a perception of threat [57]. This dimension also covers the feelings elicited by users not knowing the intended use of an artefact, as discussed by Oozu et al. [55]. Unpredictability was also a concern for the focus group participants:

P12: For example, I always think about the question, okay, what can the device actually do - the devices of today can do more and more. And you can no longer estimate the [functional] range, (...) I don't even know what it can do.
Having proposed an understanding of the creepiness of interactive artefacts, our next goal was to understand how to ascertain how creepy a given system is. To this end, we decided to build a structured questionnaire. A validated questionnaire would allow designers and researchers to compare artefacts in terms of creepiness levels. Furthermore, through choosing scale items, we could gain additional insight into how users understand creepiness. We used a structured process to develop our scale, based on the methods recommended by Boateng et al. [4]. Given the lack of local standards in HCI for developing questionnaires, our method decisions were also influenced by Mejia and Yarosh's [46] work on a related questionnaire designed specifically for use in HCI.

Figure 2: A Conceptual Model of Creepiness. We built the model based on our literature review and qualitative data gathered in the focus groups. The primary task of the model is to inform our design of a scale for measuring the creepiness of technology.
Four researchers participated in generating initial items for the scale. The researchers first worked independently, creating items based on related work in the three dimensions and on quotes from focus group participants. We then conducted a coordination meeting where all the generated items were merged and discussed. After removing duplicates and near-duplicates, we obtained an initial list of 47 items.
For the next step, we contacted four experts to provide feedback on the list of possible scale items. We chose a diverse set of experts to gather broad feedback: a professor in user modelling, a researcher in machine learning, a researcher in psychology and a user experience lead at a major software company. They provided feedback by commenting on existing items and suggesting new items. Having obtained the feedback, we built a table where we identified problematic items and discussed possible new additions. This process resulted in a final list of 61 items.
In the next stage of our process, we designed an online survey using the Qualtrics platform to gather data from participants in order to perform exploratory factor analysis and item reduction. Boateng et al. [4], referring to Comrey [10], recommend a sample size of a minimum of 200 participants for studies of this kind, and we applied this guideline.

We recruited a total of n = 209 participants. The participants were recruited over the Amazon Mechanical Turk service (MTurk) and reimbursed with $1. (We used the Qualtrics survey duration estimate, rewarded at a rate approved by the institution of the first author. Based on the median completion time, the remuneration was provided at a rate of USD 14 per hour.) Of these participants, 109 resided in the European Economic Area and 100 lived in the USA. We informed all participants that study participation was voluntary and that, if they felt uncomfortable, they could leave at any point. We also informed them that the data would be collected in anonymised form. The survey was conducted online and could be completed in 15 minutes. The average age of the participants was 36 years, with 33% identifying as female, 66% as male and one participant preferring not to state their gender. We asked all participants about their demographics and to fill out a technology adoption scale before the survey.

In order to evaluate the informative value of our items, we selected four research prototypes in accordance with our model of creepiness. Two prototypes involve attaching technology to the user's skin: one explores opportunities for crafting on-skin interfaces using woven materials [72], while the other treats the user's hand as part of an on-skin printed circuit board [30]. The other works include a prototype from the domain of mobile devices, a Finger-Navi [74] integrating the smartphone with a physical finger, and a hygiene device, a teleoperated bottom wiper [20]. Each participant in the survey was randomly given a short description and a representative image of exactly one prototype. Afterwards, we asked them how much they agreed with each item on our list with regard to the presented technology on a 7-point Likert scale (strongly agree to strongly disagree).
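To illustrate how such Likert responses are typically turned into per-dimension scores, here is a minimal sketch in Python. The item identifiers (i1–i8) and their assignment to the three model dimensions are invented placeholders for illustration only; the published item-factor assignment is given in Table 2.

```python
# Sketch (not the authors' code): aggregating 7-point Likert responses
# into per-factor mean scores.

# Map verbal anchors to numeric codes (1 = strongly disagree ... 7 = strongly agree).
ANCHORS = {
    "strongly disagree": 1, "disagree": 2, "somewhat disagree": 3,
    "neutral": 4, "somewhat agree": 5, "agree": 6, "strongly agree": 7,
}

# Hypothetical assignment of item ids to the three model dimensions.
FACTORS = {
    "implied_malice": ["i1", "i2"],
    "undesirability": ["i3", "i4", "i5"],
    "unpredictability": ["i6", "i7", "i8"],
}

def score_response(response: dict) -> dict:
    """Mean numeric rating per factor for one participant's response."""
    scores = {}
    for factor, items in FACTORS.items():
        values = [ANCHORS[response[item]] for item in items]
        scores[factor] = sum(values) / len(values)
    return scores

resp = {"i1": "agree", "i2": "strongly agree", "i3": "neutral",
        "i4": "disagree", "i5": "neutral", "i6": "somewhat agree",
        "i7": "agree", "i8": "agree"}
print(score_response(resp))  # implied_malice = (6 + 7) / 2 = 6.5
```

Mean subscale scores keep the two-item and three-item factors comparable despite their uneven lengths.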
We conducted factor analysis on the survey data collected, using a varimax rotation, thus replicating the method by Mejia and Yarosh [46]. We expected the factors to be orthogonal in light of the lack of an established model of creepiness. We chose to perform an orthogonal rotation as the qualitative data suggested that creepiness could be a result of different independent qualities. Further, the different sources of creepiness present in related work suggest an independent relationship [11]. We used parallel analysis and scree plots to determine the optimal number of factors. The examination of the scree plot suggested an optimal solution with three factors.

We then began the process of reducing the number of items. First, we removed all loadings below 0.30 [4]. We then removed the items which loaded on multiple factors. This item list was further refined by iteratively removing low-loading items and optimising for inter-item reliability. We computed current and theoretical Cronbach's alpha coefficients. Our goal was to create a final scale that was as short as possible for practical reasons, so that others, be they in industry, academia, government or elsewhere, could deploy it to rapidly compare interactive technologies. The resulting structure consisted of two items loading on one factor and three items loading on each of the other two factors. We made the non-obvious decision to use only two items for one of the factors, as the items loading on that factor were highly correlated with each other and relatively uncorrelated with the other items. Worthington and Whittaker [80] note that two-item constructs are allowable in such cases. While the uneven number of items is not desirable (due to more complicated scoring), this theoretical scale structure offered the best performance in terms of Cronbach's alpha for the scale, α = .74, and all subscales. The theoretical factor model fit also presented correct parameters at TLI = .96 and RMSEA = .06. The theoretical composition of the scale is shown in Table 2. The percentage of variance explained was 67.7% and item communalities were sufficient according to the guidelines set by Hair et al. [A]. The proposed factor structure matched our conceptual model.
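The inter-item reliability optimisation above relies on Cronbach's alpha. The computation can be sketched in plain Python as follows; the item scores are invented for illustration and are not data from our surveys:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a set of questionnaire items.

    items: list of k lists, each holding one item's scores across
    the same n respondents (in the same order).
    alpha = k/(k-1) * (1 - sum(item variances) / variance of sum scores)
    """
    def var(xs):  # sample variance (n-1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    k = len(items)
    totals = [sum(col) for col in zip(*items)]  # per-respondent sum score
    return k / (k - 1) * (1 - sum(var(i) for i in items) / var(totals))

# Illustrative data: three items answered by five respondents.
items = [[5, 6, 4, 7, 3],
         [4, 6, 5, 7, 2],
         [5, 7, 4, 6, 3]]
print(round(cronbach_alpha(items), 2))  # → 0.95
```

Values above roughly .70 are conventionally taken as acceptable internal consistency, which is the threshold logic behind iteratively dropping low-loading items.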
Having built a proposed scale with a theoretical underlying factor model, we proceeded to evaluate the PCTS. We first conducted Confirmatory Factor Analysis to verify the underlying model. Next, we conducted a series of tests to check the scale's construct validity and reliability.
We recruited n = 100 participants over MTurk following the first survey. The reimbursement was USD 0.80 and the study was conducted online. The study took 5 minutes to complete. The average age of the participants was 34 years (SD = . y); 30% identified as female and 70% as male.

In order to evaluate the scale, we created two videos of different methods of logging into a computer. One method was typing a password by hand using a keyboard. For the second method, we used an EEG device and participants were told that the user authenticates with their brain waves. Both methods are shown in Figure 3. We randomly presented each participant with one of the two videos. Afterwards, we asked them how much they agreed with each item of our final list about the presented technology on a 7-point Likert scale (strongly agree to strongly disagree).
Up to this point, the structure of our scale was only theoretical, i.e. it had not been validated. As a first step in the evaluation of our scale, we conducted Confirmatory Factor Analysis (CFA). This analysis enabled us to conduct a test of dimensionality, which could determine the correctness of our proposed factor model. We used a three-factor model with the latent variables defined as in Table 2. We obtained a fit which conformed to the required criteria [4], with TLI = 1.02 and RMSEA < .05. This suggests that the scale was internally consistent. Additionally, the model showed moderate to high correlations between the subscales, showing that the overall construct of creepiness as proposed was valid. The CFA model is shown in Figure 4.

Figure 3: The two different conditions for the scale evaluation: a user entering a password via keyboard (top) and authenticating using their brain waves (bottom).
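Both fit indices reported for the factor models, TLI and RMSEA, are derived from the model and baseline chi-square statistics. A minimal sketch of the standard formulas in plain Python; the chi-square values below are invented for illustration, not figures from our analysis:

```python
import math

def tli(chi2_model, df_model, chi2_null, df_null):
    """Tucker-Lewis Index: compares model misfit per degree of freedom
    against the baseline (null) model; values near 1 indicate good fit
    (the index can slightly exceed 1 for well-fitting models)."""
    null_ratio = chi2_null / df_null
    return (null_ratio - chi2_model / df_model) / (null_ratio - 1)

def rmsea(chi2_model, df_model, n):
    """Root Mean Square Error of Approximation; 0 means the model fits
    at least as well as expected by chance (chi-square <= df)."""
    return math.sqrt(max(0.0, chi2_model - df_model) / (df_model * (n - 1)))

# Hypothetical fit statistics for a sample of n = 100:
print(round(tli(20, 17, 500, 28), 2))   # → 0.99
print(round(rmsea(20, 17, 100), 3))     # → 0.042
```

Conventional cut-offs are TLI above roughly .95 and RMSEA below roughly .06, which is the sense in which the reported fits conform to the required criteria.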
Next, we examined the construct validity of the PCTS. We decided to test the scale in two ways. First, we checked whether the scale effectively differentiated between 'known groups', i.e. interactive technologies that differ in creepiness. Second, we investigated whether the scale was distinct from possibly related concepts measured in other questionnaires.
Boateng et al. [4] listed comparison between 'known groups' as a method of establishing construct validity. Mejia and Yarosh [46] also used this method. In our work, we conducted a comparison between a system known (albeit qualitatively) to be creepy in the literature [48], an EEG system, and a conventional solution with which the users were familiar, the keyboard. We hypothesised that logging in with EEG would be perceived as significantly more creepy than logging in with only a keyboard. Shapiro-Wilk tests revealed that the samples were not normally distributed. Thus, we applied non-parametric statistics. Table 3 shows Mann-Whitney U test results for the PCTS and its subscales.
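The known-groups comparison rests on the Mann-Whitney U statistic. As a minimal plain-Python sketch (not the analysis code used in the paper; the condition scores below are invented):

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for two independent samples.

    Ranks the pooled data (average ranks for ties) and derives U for
    sample `a`; the conventionally reported U is min(U_a, U_b)."""
    pooled = sorted((v, i) for i, v in enumerate(a + b))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tied block (1-based)
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = avg
        i = j + 1
    r_a = sum(ranks[i] for i in range(len(a)))      # rank sum of sample a
    u_a = r_a - len(a) * (len(a) + 1) / 2
    u_b = len(a) * len(b) - u_a
    return min(u_a, u_b)

# Illustrative (invented) total scores: keyboard vs EEG condition.
print(mann_whitney_u([12, 15, 18, 20], [25, 28, 30, 31]))  # → 0.0
```

A U of 0 means the two samples do not overlap at all; significance is then assessed against the U distribution (or its normal approximation) for the given sample sizes.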
Discriminant validity refers to the degree to which a scale measures a concept distinct from those measured by other scales. Given the conceptual model behind building the PCTS, we wanted to check whether creepiness was not simply a reflection of social acceptability or anticipated usability. Such a comparison is only possible with other validated questionnaires, and so our choice of alternative concepts was limited. We therefore decided to investigate whether the PCTS provided measures different from the dimensions of the Technology Acceptance Model, as measured by a highly cited questionnaire by Park [58]. This questionnaire features a number of factors which could be potential confounding concepts for the PCTS: perceived ease of use (PE), perceived usefulness (PU), attitude (AT) and behavioural intent (BI). Furthermore, we ensured that the PCTS measured properties of the artefact and not the user's personality in terms of attitudes towards technology. To this end, we compared PCTS scores with McKnight et al.'s Propensity to Trust in General Technology (PTT, [45]). We computed Spearman correlations between the different scales. The results, shown in Table 4, reveal at most low to medium correlations between the PCTS or its subscales and the other measurement instruments. The medium correlations suggest that some of the instruments may be measuring contextual factors related to the PCTS. There might also be an overlap between some of the dimensions of the PCTS and the dimensions developed by Park [58], which does not impact the overall scores of the PCTS. These results show that the PCTS measures a novel concept and validate the PCTS's underlying model.

CHI '21, May 8–13, 2021, Yokohama, Japan   Paweł W. Woźniak et al.

Table 2: The reduced, eight-item Perceived Creepiness of Technology Scale (PCTS). The reported Cronbach's alphas and factor loadings were calculated using the data from the exploratory survey.

Subscale/Item                Factor Loading
Implied Malice, α = .
Undesirability, α = .
Unpredictability, α = .

Table 3: Scale evaluation through differentiation by known groups for the PCTS. Non-parametric tests show that logging in via EEG was significantly more creepy than using only the keyboard, for the full scale and the subscales. The table reports Bonferroni-corrected p-values.

Scale/Subscale   M Keyboard   SD Keyboard   M EEG   SD EEG   U      p
PCTS             22.24        13.62         35.12   7.80     1978   < .

Figure 4: The factor model for the PCTS with the three correlated subscales resulting from confirmatory factor analysis. Note that the graph uses inverse scores for reverse-scored items, thus all correlations are positive.
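The discriminant-validity analysis relies on Spearman's rank correlation, i.e. a Pearson correlation computed on tie-averaged ranks. A minimal plain-Python sketch (illustrative only, with invented data):

```python
def spearman_rho(x, y):
    """Spearman rank correlation between two equal-length samples."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(v):
            j = i
            while j + 1 < len(v) and v[order[j + 1]] == v[order[i]]:
                j += 1
            for k in range(i, j + 1):        # average rank for ties
                r[order[k]] = (i + j) / 2 + 1
            i = j + 1
        return r

    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Any strictly monotone association yields rho = 1 regardless of scale:
print(spearman_rho([1, 2, 3, 4], [10, 20, 40, 80]))
```

Because it operates on ranks, Spearman's rho is appropriate for the ordinal Likert-based scores compared in Table 4 and does not assume normality.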
As a final evaluation of the scale, we tested its temporal stability, i.e. whether the scale can produce reliable results at different time points. To this end, we administered the PCTS to a group of n = 20 participants (M = . y, SD = . y; 15 male and 5 female) twice, with a minimum 14-day break between the studies. There is a lack of consensus in the literature about how long the time between the two surveys should be. We decided to replicate Mejia and Yarosh's [46] approach, albeit with a larger participant sample. In an online survey, the participants were asked to rate an artefact previously qualitatively reported to evoke feelings of creepiness, the HEXBUG [71]. Contrary to the previous studies, we used snowball sampling and social media posts to recruit the participants. This allowed us to ensure we could reach participants effectively to ask them to complete the survey a second time. Boateng et al. [4] list a number of ways to assess test-retest reliability. We chose to compute the intra-class correlation coefficient [34] to investigate the relationship between the two measurement moments. There was high reliability, with κ = . , p < . .
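Test-retest reliability via the intra-class correlation can be sketched as follows. This is a plain-Python ICC(2,1) in the Shrout-Fleiss formulation (two-way random effects, absolute agreement, single rating); it is an illustrative sketch with invented scores, not the analysis code used in the paper:

```python
def icc_2_1(sessions):
    """ICC(2,1) via a two-way ANOVA decomposition.

    sessions: list of k lists (one per measurement moment), each holding
    the scores of the same n participants in the same order."""
    k, n = len(sessions), len(sessions[0])
    grand = sum(sum(s) for s in sessions) / (k * n)
    row_means = [sum(s[i] for s in sessions) / k for i in range(n)]  # per participant
    col_means = [sum(s) / n for s in sessions]                       # per moment
    ss_total = sum((x - grand) ** 2 for s in sessions for x in s)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Two measurement moments that agree closely give an ICC near 1:
test = [10, 20, 30, 40, 50]
retest = [12, 19, 33, 41, 48]
print(round(icc_2_1([test, retest]), 3))  # → 0.992
```

Koo and Li [34] suggest interpreting values above .90 as excellent reliability, which is the benchmark the test-retest analysis speaks to.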
In this section, we provide the necessary details for administering the PCTS as well as information on how to analyse the results. In addition, we discuss the limitations of our approach and possibilities for further development.
The PCTS is scored on a seven-point Likert scale from Strongly Agree (7) to Strongly Disagree (1). Items 6 and 8 in the scale are reverse-scored. Our scale has a 2 + 3 + 3 item structure. Consequently, the PCTS is scored as (reverse-scored items are marked with the subscript R):

PCTS = PCTS_IM + PCTS_UD + PCTS_UP

where PCTS_IM is 1.5 times the sum of the two Implied Malice items, PCTS_UD is the sum of the three Undesirability items, and PCTS_UP is the sum of the three Unpredictability items, two of which (items 6 and 8) enter as Q_R, i.e. reverse-scored. Thus, the lowest score on the scale is 9 and the highest is 63. Higher scores indicate that the interactive artefact evokes stronger feelings of creepiness. This scoring offers a transparent and actionable way for designers and researchers to use the PCTS. The evaluation of the scale presented in this paper suggests that conducting null-hypothesis testing using PCTS scores and its subscale scores is permitted. We recommend checking the normality of the data and possibly using non-parametric statistics when conducting experiments with the scale.
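The scoring above can be sketched in Python. The item-to-subscale index assignment below (items 1 and 2 to Implied Malice, 3 to 5 to Undesirability, 6 to 8 to Unpredictability) is an illustrative assumption and should be taken from Table 2 in practice; only the fact that items 6 and 8 are reverse-scored is fixed by the text:

```python
def score_pcts(responses, im=(1, 2), ud=(3, 4, 5), up=(6, 7, 8), reverse=(6, 8)):
    """Score the eight-item PCTS.

    responses: mapping from item numbers 1..8 to Likert answers
    1 (Strongly Disagree) .. 7 (Strongly Agree).
    The subscale index sets are illustrative assumptions; consult
    Table 2 for the actual item assignment.
    """
    def val(i):
        r = responses[i]
        return 8 - r if i in reverse else r  # reverse-score items 6 and 8

    pcts_im = 1.5 * sum(val(i) for i in im)  # two items, weighted so the
    pcts_ud = sum(val(i) for i in ud)        # total spans the 9..63 range
    pcts_up = sum(val(i) for i in up)
    return pcts_im + pcts_ud + pcts_up

# The least creepy response pattern scores 9, the most creepy 63:
least = {i: (7 if i in (6, 8) else 1) for i in range(1, 9)}
most = {i: (1 if i in (6, 8) else 7) for i in range(1, 9)}
print(score_pcts(least), score_pcts(most))  # → 9.0 63.0
```

The 1.5 weighting on the two-item subscale is what makes the minimum and maximum come out to the 9 and 63 stated above (1.5 × 2 + 3 + 3 = 9 and 1.5 × 14 + 21 + 21 = 63).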
The robust structure of the PCTS allows using it for different study designs within HCI research as well as for quick assessments of research prototypes. We particularly recommend using the PCTS in early stages of the design process. The scale is relatively short, easy to use and can offer rapid feedback. This can help identify artefacts that appear creepy to users early in the design process and raise awareness of how to steer the design process in a more desirable direction.

We developed the PCTS primarily to capture users' first impressions of experiencing an artefact. Consequently, the scale is particularly suited to studying initial encounters with technologies, discovering new (physiological) sensing modalities or interacting with previously unknown aesthetic forms. The scale examines an aspect of user experience beyond acceptance and usability, as indicated by our results. We recommend using the PCTS to identify features of artefacts that may be creepy early in the design process. Using the PCTS enables effectively managing first impressions of technologies and ensuring that the technology does not intimidate the user to a point where they are unwilling to verify its usability. The PCTS can facilitate rapid selection of solutions at the prototype generation phase of the user-centred design process.

In this context, future users of the scale should recognise that creepiness is not necessarily a negative aspect of technologies. The provocative or intentionally ambiguous technologies discussed in this paper may use creepiness for the benefit of users. This suggests that creepiness is a highly contextualised concept. Hence, our scale is best used when comparing different technologies within the same context. Consequently, future users of the PCTS should carefully control the context in which the participants are introduced to the artefacts studied in order to avoid bias.

The psychometric properties of the PCTS indicate that the scale can be used for between- and within-subject studies and for repeated-measures designs. If particular aspects of a given technology are of importance, e.g. its ethical underpinnings, the subscales of the PCTS can also be analysed. However, we recommend that the use of the PCTS be accompanied by pre-studies and rich qualitative data gathering. Potential users of the PCTS should be sure that the technology studied may evoke creepiness in the understanding of the PCTS, i.e. an innate, hard-to-define feeling of unease. Here, the PCTS may be used to help the designer reflect on the potential creepiness. Alternative, more detailed questionnaires can be used if the main concern about 'creepiness' is indeed a question of privacy encroachment [8] or social acceptability [64]. Our results show that the PCTS measures a concept different from usability or social acceptability. Thus, our scale broadens the apparatus available to HCI researchers for quantitatively understanding impressions of new technologies.

Table 4: Spearman correlations between the PCTS, its subscales and potential alternative questionnaires that could measure creepiness. Significant correlation tests are marked with an asterisk.

           PCTS     PCTS-IM   PCTS-UD   PCTS-UP   PTT     PE      PU      AT
PCTS-IM    0.85*
PCTS-UD    0.89*    0.66**
PCTS-UP    0.70*    0.43**    0.52*
PTT       -0.22*   -0.24*    -0.09     -0.32*
PE        -0.38*   -0.31*    -0.44*    -0.67*    0.34*
PU         0.03     0.22*    -0.03     -0.19     0.20*   0.39*
AT        -0.38*   -0.30*    -0.48*    -0.59*    0.31*   0.72*   0.55**
BI        -0.42*   -0.24*    -0.45*    -0.50*    0.29*   0.70*   0.54*   0.86*
We recognise that the development and possible use of the PCTS is prone to certain limitations. First, we made the decision to focus the development of the scale on assessing the initial impressions of technologies. This implies that the usefulness of the PCTS for long-term studies is unknown. We envision that the scale could be used to measure how users gradually become more acquainted with a technology and how their perception of it changes. This would be particularly relevant for better understanding interactive technologies with which users develop long-term relationships, e.g. voice assistants [62]. In future research, we plan to evaluate whether the PCTS can be used effectively beyond first impressions.

While we used a number of recruitment strategies and study methods, we recognise that the development of the PCTS is biased by the participant samples used. The focus groups which strongly influenced our conceptual model of creepiness were conducted solely among residents of Europe. The majority of the participants in the studies which used MTurk recruitment had a Western cultural background. The term creepy is a modern English word that is difficult to translate into many languages. Consequently, we note that the PCTS is most likely only applicable to users with a selected subset of cultural backgrounds. We hope that future research can develop alternative versions of the scale for other cultural contexts.
In this paper, we presented the development and evaluation of the Perceived Creepiness of Technology Scale (PCTS). Based on a literature review and focus groups, we developed a conceptual model of creepiness. We then described how we constructed, reduced and evaluated the scale. We illustrated the discriminant validity of the scale, its ability to differentiate between known groups and its test-retest reliability. Our scale enables designers and researchers to rapidly ascertain possible feelings of unease caused by novel interactive technologies. The PCTS can be used to conduct rapid comparative studies of novel artefacts, especially ones that exhibit elements of autonomy or feature direct contact with the body.

We designed the PCTS with the goal of enabling a broader understanding of how current and future technologies make us feel and how to build technologies that do not cause negative emotions in users. We also note that our scale can help in studying possibly provocative artefacts that could foster engagement. We hope that our scale can foster new research avenues into increasing our understanding of creepiness and enlighten those designing to avoid (or to promote) creepiness by providing them with a creepiness metric they can easily use to conduct studies of novel technologies.
ACKNOWLEDGMENTS ´ s Excellence Strategy (EXC 2077, University of Bremen)and a Lichtenberg professorship funded by the Volkswagen founda-tion. This work was financially supported by Utrecht University’sFocus Area: Sports and Society and the European Union’s Horizon2020 Programme under ERCEA grant no. 683008 AMPLIFY. REFERENCES [1] Michelle Annett, Matthew Lakier, Franklin Li, Daniel Wigdor, Tovi Grossman,and George Fitzmaurice. 2016. The Living Room: Exploring the Haunted andParanormal to Transform Design and Interaction. In
Proceedings of the 2016 ACMConference on Designing Interactive Systems (DIS ’16) . Association for Computing reepy Technology CHI ’21, May 8–13, 2021, Yokohama, Japan
Machinery, New York, NY, USA, 1328–1340. https://doi.org/10.1145/2901790.2901819 event-place: Brisbane, QLD, Australia.[2] Louise Barkhuus and Juliana Tashiro. 2010. Student Socialization in the Age ofFacebook. In
Proceedings of the SIGCHI Conference on Human Factors in ComputingSystems (CHI ’10) . Association for Computing Machinery, New York, NY, USA,133–142. https://doi.org/10.1145/1753326.1753347 event-place: Atlanta, Georgia,USA.[3] Ann Blandford, Dominic Furniss, and Stephann Makri. 2016. Qualitative HCI re-search: Going behind the scenes.
Synthesis lectures on human-centered informatics
9, 1 (2016), 1–115.[4] Godfred O. Boateng, Torsten B. Neilands, Edward A. Frongillo, Hugo R. Melgar-Quiñonez, and Sera L. Young. 2018. Best Practices for Developing and ValidatingScales for Health, Social, and Behavioral Research: A Primer.
Frontiers in PublicHealth
Proceedings of theEleventh International Conference on Tangible, Embedded, and Embodied Interaction(TEI ’17) . Association for Computing Machinery, New York, NY, USA, 503–509.https://doi.org/10.1145/3024969.3025083 event-place: Yokohama, Japan.[6] Kimberly A. Brink, Kurt Gray, and Henry M. Wellman. 2019. CreepinessCreeps In: Uncanny Valley Feelings Are Acquired in Childhood.
Child Devel-opment
90, 4 (2019), 1202–1214. https://doi.org/10.1111/cdev.12999 _eprint:https://onlinelibrary.wiley.com/doi/pdf/10.1111/cdev.12999.[7] Matt Burgess. 2018. Spotify and Tinder need to stop being creepy with customerdata.
Wired UK
Proceedings ofthe Human Factors and Ergonomics Society Annual Meeting
47, 11 (Oct. 2003), 1326–1330. https://doi.org/10.1177/154193120304701102 Publisher: SAGE PublicationsInc.[9] Jacob Cohen. 2013.
Statistical Power Analysis for the Behavioral Sciences . Routledge.https://doi.org/10.4324/9780203771587[10] Andrew L. Comrey. 1988. Factor-analytic methods of scale development inpersonality and clinical psychology.
Journal of Consulting and Clinical Psychology
56, 5 (1988), 754–761. https://doi.org/10.1037/0022-006X.56.5.754 Place: USPublisher: American Psychological Association.[11] Robert F DeVellis. 2016.
Scale development: Theory and applications . Vol. 26. Sagepublications.[12] Andre Doucette, Carl Gutwin, Regan L. Mandryk, Miguel Nacenta, and SunnySharma. 2013. Sometimes When We Touch: How Arm Embodiments ChangeReaching and Collaboration on Digital Tables. In
Proceedings of the 2013 Con-ference on Computer Supported Cooperative Work (CSCW ’13) . Association forComputing Machinery, New York, NY, USA, 193–202. https://doi.org/10.1145/2441776.2441799 event-place: San Antonio, Texas, USA.[13] Kevin Dugan. 2018. Facebook’s creepy new speakers are freaking peopleout. https://nypost.com/2018/10/08/facebooks-creepy-new-speakers-are-freaking-people-out/[14] Andrew Gambino and S. Shyam Sundar. 2019. Acceptance of Self-Driving Cars:Does Their Posthuman Ability Make Them More Eerie or More Desirable?. In
Extended Abstracts of the 2019 CHI Conference on Human Factors in ComputingSystems (CHI EA ’19) . Association for Computing Machinery, New York, NY, USA,1–6. https://doi.org/10.1145/3290607.3312870 event-place: Glasgow, ScotlandUk.[15] William W. Gaver, Jacob Beaver, and Steve Benford. 2003. Ambiguity as a resourcefor design. In
Proceedings of the SIGCHI Conference on Human Factors in ComputingSystems (CHI ’03) . Association for Computing Machinery, New York, NY, USA,233–240. https://doi.org/10.1145/642611.642653[16] Vivian Genaro Motti and Kelly Caine. 2014. Understanding the Wearabilityof Head-Mounted Devices from a Human-Centered Perspective. In
Proceedingsof the 2014 ACM International Symposium on Wearable Computers (ISWC ’14) .Association for Computing Machinery, New York, NY, USA, 83–86. https://doi.org/10.1145/2634317.2634340 event-place: Seattle, Washington.[17] Daniel Gooch and Leon Watts. 2012. YourGloves, Hothands and Hotmits: Devicesto Hold Hands at a Distance. In
Proceedings of the 25th Annual ACM Symposium onUser Interface Software and Technology (UIST ’12) . Association for Computing Ma-chinery, New York, NY, USA, 157–166. https://doi.org/10.1145/2380116.2380138event-place: Cambridge, Massachusetts, USA.[18] Rebecca E. Grinter and Allison Woodruff. 2002. Ears and Hair: What HeadsetsWill People Wear?. In
CHI ’02 Extended Abstracts on Human Factors in ComputingSystems (CHI EA ’02) . Association for Computing Machinery, New York, NY,USA, 680–681. https://doi.org/10.1145/506443.506543 event-place: Minneapolis,Minnesota, USA.[19] J Gutman. 1982. A Means-End Chain Model on Consumer Categorization Pro-cesses.
Journal of Marketing
46 (1982), 209–226. [20] Takeo Hamada, Hironori Mitake, Shoichi Hasegawa, and Makoto Sato. 2015. ATeleoperated Bottom Wiper. In
Proceedings of the 6th Augmented Human Interna-tional Conference (Singapore, Singapore) (AH ’15) . Association for Computing Ma-chinery, New York, NY, USA, 145–150. https://doi.org/10.1145/2735711.2735794[21] Julia Hanson, Miranda Wei, Sophie Veys, Matthew Kugler, Lior Strahilevitz, andBlase Ur. 2020. Taking Data Out of Context to Hyper-Personalize Ads: Crowd-workers’ Privacy Perceptions and Decisions to Disclose Private Information. In
Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems(CHI ’20) . Association for Computing Machinery, New York, NY, USA, 1–13.https://doi.org/10.1145/3313831.3376415 event-place: Honolulu, HI, USA.[22] Lane Harrison, Katharina Reinecke, and Remco Chang. 2015. InfographicAesthetics: Designing for the First Impression. In
Proceedings of the 33rd An-nual ACM Conference on Human Factors in Computing Systems (CHI ’15) . As-sociation for Computing Machinery, New York, NY, USA, 1187–1190. https://doi.org/10.1145/2702123.2702545[23] Kashmir Hill. 2020. The Secretive Company That Might End Privacy as WeKnow It.
The New York Times
Proceedings of the 3rd ACM/IEEE international conference onHuman robot interaction (HRI ’08) . Association for Computing Machinery, NewYork, NY, USA, 169–176. https://doi.org/10.1145/1349822.1349845[25] Ali Israr and Freddy Abnousi. 2018. Towards Pleasant Touch: Vibrotactile Gridsfor Social Touch Interactions. In
Extended Abstracts of the 2018 CHI Conference onHuman Factors in Computing Systems (CHI EA ’18) . Association for ComputingMachinery, New York, NY, USA, 1–6. https://doi.org/10.1145/3170427.3188546event-place: Montreal QC, Canada.[26] Alisa Kalegina, Grace Schroeder, Aidan Allchin, Keara Berlin, and Maya Cakmak.2018. Characterizing the Design Space of Rendered Robot Faces. In
Proceedingsof the 2018 ACM/IEEE International Conference on Human-Robot Interaction (HRI’18) . Association for Computing Machinery, New York, NY, USA, 96–104. https://doi.org/10.1145/3171221.3171286[27] Laewoo Kang, Taezoo Park, and Steven Jackson. 2014. Scale: Human Interactionswith Broken and Discarded Technologies. In
CHI ’14 Extended Abstracts on HumanFactors in Computing Systems (CHI EA ’14) . Association for Computing Machinery,New York, NY, USA, 399–402. https://doi.org/10.1145/2559206.2574831 event-place: Toronto, Ontario, Canada.[28] Seokbin Kang, Leyla Norooz, Vanessa Oguamanam, Angelisa C. Plane, Tamara L.Clegg, and Jon E. Froehlich. 2016. SharedPhys: Live Physiological Sensing,Whole-Body Interaction, and Large-Screen Visualizations to Support SharedInquiry Experiences. In
Proceedings of the The 15th International Conference onInteraction Design and Children (IDC ’16) . Association for Computing Machinery,New York, NY, USA, 275–287. https://doi.org/10.1145/2930674.2930710 event-place: Manchester, United Kingdom.[29] Seokbin Kang, Ekta Shokeen, Virginia L. Byrne, Leyla Norooz, Elizabeth Bon-signore, Caro Williams-Pierce, and Jon E. Froehlich. 2020. ARMath: AugmentingEveryday Life with Math Learning. In
Proceedings of the 2020 CHI Conferenceon Human Factors in Computing Systems (CHI ’20) . Association for ComputingMachinery, New York, NY, USA, 1–15. https://doi.org/10.1145/3313831.3376252event-place: Honolulu, HI, USA.[30] Hsin-Liu Cindy Kao, Abdelkareem Bedri, and Kent Lyons. 2018. SkinWire:Fabricating a Self-Contained On-Skin PCB for the Hand.
Proc. ACM Interact.Mob. Wearable Ubiquitous Technol.
2, 3, Article 116 (Sept. 2018), 23 pages. https://doi.org/10.1145/3264926[31] Fares Kayali, Oliver Hödl, Geraldine Fitzpatrick, Peter Purgathofer, AlexanderFilipp, Ruth Mateus-Berr, Ulrich Kühn, Thomas Wagensommerer, Johannes Kretz,and Susanne Kirchmayr. 2017. Playful Technology-Mediated Audience Partic-ipation in a Live Music Event. In
Extended Abstracts Publication of the AnnualSymposium on Computer-Human Interaction in Play (CHI PLAY ’17 ExtendedAbstracts) . Association for Computing Machinery, New York, NY, USA, 437–443. https://doi.org/10.1145/3130859.3131293 event-place: Amsterdam, TheNetherlands.[32] Norene Kelly and Stephen Gilbert. 2016. The WEAR Scale: Developing a Measureof the Social Acceptability of a Wearable Device. In
Proceedings of the 2016 CHIConference Extended Abstracts on Human Factors in Computing Systems (CHIEA ’16) . Association for Computing Machinery, New York, NY, USA, 2864–2871.https://doi.org/10.1145/2851581.2892331[33] Marion Koelle, Swamy Ananthanarayan, and Susanne Boll. 2020. Social Ac-ceptability in HCI: A Survey of Methods, Measures, and Design Strategies. In
Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems .ACM, Honolulu HI USA, 1–19. https://doi.org/10.1145/3313831.3376162[34] Terry K. Koo and Mae Y. Li. 2016. A Guideline of Selecting and ReportingIntraclass Correlation Coefficients for Reliability Research.
Journal of ChiropracticMedicine
15, 2 (2016), 155 – 163. https://doi.org/10.1016/j.jcm.2016.02.012[35] Markus Langer and Cornelius J. König. 2018. Introducing and Testing theCreepiness of Situation Scale (CRoSS).
Frontiers in Psychology
HI ’21, May 8–13, 2021, Yokohama, Japan Paweł W. Woźniak et al. [36] Young Suk Lee. 2017. Tea with Crows: Experiencing Proactive Ubiquitous Tech-nology by Interactive Art. In
Proceedings of the Eleventh International Conferenceon Tangible, Embedded, and Embodied Interaction (TEI ’17) . Association for Com-puting Machinery, New York, NY, USA, 677–680. https://doi.org/10.1145/3024969.3025058 event-place: Yokohama, Japan.[37] Vincent Levesque, Louise Oram, Karon MacLean, Andy Cockburn, Nicholas D.Marchuk, Dan Johnson, J. Edward Colgate, and Michael A. Peshkin. 2011. En-hancing Physicality in Touch Interaction with Programmable Friction. In
Pro-ceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI’11) . Association for Computing Machinery, New York, NY, USA, 2481–2490.https://doi.org/10.1145/1978942.1979306 event-place: Vancouver, BC, Canada.[38] Hajin Lim and Susan R. Fussell. 2017. Making Sense of Foreign Language Posts inSocial Media.
Proceedings of the ACM on Human-Computer Interaction
1, CSCW(Dec. 2017), 69:1–69:16. https://doi.org/10.1145/3134704[39] Chaolan Lin, Karl F. MacDorman, Selma Šabanović, Andrew D. Miller, and ErinBrady. 2020. Parental Expectations, Concerns, and Acceptance of StorytellingRobots for Children. In
Companion of the 2020 ACM/IEEE International Conferenceon Human-Robot Interaction (HRI ’20) . Association for Computing Machinery,New York, NY, USA, 346–348. https://doi.org/10.1145/3371382.3378376[40] Quantified Toilets LLC. 2014. http://quantifiedtoilets.com/.
Critical MakingHackathon at the SIGCHI Conference on Human Factors in Computing Systems (2014).[41] Diana Löffler, Judith Dörrenbächer, and Marc Hassenzahl. 2020. The UncannyValley Effect in Zoomorphic Robots: The U-Shaped Relation Between AnimalLikeness and Likeability. In
Proceedings of the 2020 ACM/IEEE International Confer-ence on Human-Robot Interaction (HRI ’20) . Association for Computing Machinery,New York, NY, USA, 261–270. https://doi.org/10.1145/3319502.3374788[42] Adrienne Matei. 2017. New technology is forcing us to confront the ethics ofbringing people back from the dead. https://qz.com/896207/death-technology-will-allow-grieving-people-to-bring-back-their-loved-ones-from-the-dead-digitally/[43] Francis T. McAndrew and Sara S. Koehnke. 2016. On the nature of creepiness.
NewIdeas in Psychology
43 (Dec. 2016), 10–15. https://doi.org/10.1016/j.newideapsych.2016.03.003[44] Rachel McDonnell and Martin Breidt. 2010. Face reality: investigating theUncanny Valley for virtual faces. In
ACM SIGGRAPH ASIA 2010 Sketches (SA’10) . Association for Computing Machinery, New York, NY, USA, 1–2. https://doi.org/10.1145/1899950.1899991[45] D. Harrison Mcknight, Michelle Carter, Jason Bennett Thatcher, and Paul F. Clay.2011. Trust in a specific technology: An investigation of its components andmeasures.
ACM Transactions on Management Information Systems
2, 2 (July 2011),12:1–12:25. https://doi.org/10.1145/1985347.1985353[46] Kenya Mejia and Svetlana Yarosh. 2017. A Nine-Item Questionnaire for Measuringthe Social Disfordance of Mediated Social Touch Technologies.
Proceedingsof the ACM on Human-Computer Interaction
1, CSCW (Dec. 2017), 77:1–77:17.https://doi.org/10.1145/3134712[47] Nick Merrill and John Chuang. 2018. From Scanning Brains to Reading Minds:Talking to Engineers about Brain-Computer Interface. In
Proceedings of the 2018CHI Conference on Human Factors in Computing Systems (CHI ’18) . Associationfor Computing Machinery, New York, NY, USA, 1–11. https://doi.org/10.1145/3173574.3173897 event-place: Montreal QC, Canada.[48] Nick Merrill, John Chuang, and Coye Cheshire. 2019. Sensing is Believing:What People Think Biosensors Can Reveal About Thoughts and Feelings. In
Proceedings of the 2019 on Designing Interactive Systems Conference (DIS ’19) .Association for Computing Machinery, New York, NY, USA, 413–420. https://doi.org/10.1145/3322276.3322286 event-place: San Diego, CA, USA.[49] Dylan Moore, Rebecca Currano, G. Ella Strack, and David Sirkin. 2019. The Casefor Implicit External Human-Machine Interfaces for Autonomous Vehicles. In
Proceedings of the 11th International Conference on Automotive User Interfaces andInteractive Vehicular Applications (AutomotiveUI ’19) . Association for Comput-ing Machinery, New York, NY, USA, 295–307. https://doi.org/10.1145/3342197.3345320 event-place: Utrecht, Netherlands.[50] Dylan Moore, G. Ella Strack, Rebecca Currano, and David Sirkin. 2019. VisualizingImplicit EHMI for Autonomous Vehicles. In
Proceedings of the 11th InternationalConference on Automotive User Interfaces and Interactive Vehicular Applications:Adjunct Proceedings (AutomotiveUI ’19) . Association for Computing Machinery,New York, NY, USA, 475–477. https://doi.org/10.1145/3349263.3349603 event-place: Utrecht, Netherlands.[51] R.R. Murphy, D. Riddle, and E. Rasmussen. 2004. Robot-assisted medicalreachback: a survey of how medical personnel expect to interact with rescuerobots. In
RO-MAN 2004. 13th IEEE International Workshop on Robot and Hu-man Interactive Communication (IEEE Catalog No.04TH8759) . 301–306. https://doi.org/10.1109/ROMAN.2004.1374777[52] Tao Ni, Amy K. Karlson, and Daniel Wigdor. 2011. AnatOnMe: FacilitatingDoctor-Patient Communication Using a Projection-Based Handheld Device. In
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems(CHI ’11) . Association for Computing Machinery, New York, NY, USA, 3333–3342. https://doi.org/10.1145/1978942.1979437 event-place: Vancouver, BC, Canada.[53] Mahsan Nourani, Donald R. Honeycutt, Jeremy E. Block, Chiradeep Roy, TahrimaRahman, Eric D. Ragan, and Vibhav Gogate. 2020. Investigating the Importance ofFirst Impressions and Explainable AI with Interactive Video Analysis. In
ExtendedAbstracts of the 2020 CHI Conference on Human Factors in Computing Systems(CHI EA ’20) . Association for Computing Machinery, New York, NY, USA, 1–8.https://doi.org/10.1145/3334480.3382967[54] OED Online. 2021.
Oxford English Dictionary . Oxford University Press.[55] Takeshi Oozu, Aki Yamada, Yuki Enzaki, and Hiroo Iwata. 2017. EscapingChair: Furniture-Shaped Device Art. In
Proceedings of the Eleventh Interna-tional Conference on Tangible, Embedded, and Embodied Interaction (TEI ’17) .Association for Computing Machinery, New York, NY, USA, 403–407. https://doi.org/10.1145/3024969.3025064 event-place: Yokohama, Japan.[56] Timothy Pallarino, Aaron Free, Katrina Mutuc, and Svetlana Yarosh. 2016. FeelingDistance: An Investigation of Mediated Social Touch Prototypes. In
Proceedings of the 19th ACM Conference on Computer Supported Cooperative Work and Social Computing Companion (CSCW ’16 Companion). Association for Computing Machinery, New York, NY, USA, 361–364. https://doi.org/10.1145/2818052.2869124 event-place: San Francisco, California, USA.
[57] Pablo Paredes, Ryuka Ko, Eduardo Calle-Ortiz, John Canny, Björn Hartmann, and Greg Niemeyer. 2016. Fiat-Lux: Interactive Urban Lights for Combining Positive Emotion and Efficiency. In Proceedings of the 2016 ACM Conference on Designing Interactive Systems (DIS ’16). Association for Computing Machinery, New York, NY, USA, 785–795. https://doi.org/10.1145/2901790.2901832 event-place: Brisbane, QLD, Australia.
[58] Sung Youl Park. 2009. An Analysis of the Technology Acceptance Model in Understanding University Students’ Behavioral Intention to Use e-Learning. Journal of Educational Technology & Society
Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–13. https://doi.org/10.1145/3313831.3376187 event-place: Honolulu, HI, USA.
[60] Chanda Phelan, Cliff Lampe, and Paul Resnick. 2016. It’s Creepy, But It Doesn’t Bother Me. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16). Association for Computing Machinery, New York, NY, USA, 5240–5251. https://doi.org/10.1145/2858036.2858381 event-place: San Jose, California, USA.
[61] James Pierce. 2019. Smart Home Security Cameras and Shifting Lines of Creepiness: A Design-Led Inquiry. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). Association for Computing Machinery, New York, NY, USA, 1–14. https://doi.org/10.1145/3290605.3300275 event-place: Glasgow, Scotland, UK.
[62] Amanda Purington, Jessie G. Taft, Shruti Sannon, Natalya N. Bazarova, and Samuel Hardman Taylor. 2017. "Alexa is my new BFF": Social Roles, User Satisfaction, and Personification of the Amazon Echo. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’17). Association for Computing Machinery, New York, NY, USA, 2853–2859. https://doi.org/10.1145/3027063.3053246
[63] Katharina Reinecke, Tom Yeh, Luke Miratrix, Rahmatri Mardiko, Yuechen Zhao, Jenny Liu, and Krzysztof Z. Gajos. 2013. Predicting users’ first impressions of website aesthetics with a quantification of perceived visual complexity and colorfulness. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’13). Association for Computing Machinery, New York, NY, USA, 2049–2058. https://doi.org/10.1145/2470654.2481281
[64] Julie Rico and Stephen Brewster. 2009. Gestures all around us: user differences in social acceptability perceptions of gesture based interfaces. In
Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI ’09). Association for Computing Machinery, New York, NY, USA, 1–2. https://doi.org/10.1145/1613858.1613936
[65] Katja Rogers, Giovanni Ribeiro, Rina R. Wehbe, Michael Weber, and Lennart E. Nacke. 2018. Vanishing Importance: Studying Immersive Effects of Game Audio Perception on Player Experiences in Virtual Reality. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18). Association for Computing Machinery, New York, NY, USA, 1–13. https://doi.org/10.1145/3173574.3173902 event-place: Montreal, QC, Canada.
[66] Johnny Saldaña. 2015. The coding manual for qualitative researchers. Sage.
[67] Albrecht Schmidt. 2017. Augmenting Human Intellect and Amplifying Perception and Cognition. IEEE Pervasive Computing 16, 1 (Jan. 2017), 6–10. https://doi.org/10.1109/MPRV.2017.8
[68] Valentin Schwind, Pascal Knierim, Lewis Chuang, and Niels Henze. 2017. "Where’s Pinky?": The Effects of a Reduced Number of Fingers in Virtual Reality. In Proceedings of the Annual Symposium on Computer-Human Interaction in Play (CHI PLAY ’17). Association for Computing Machinery, New York, NY, USA, 507–515. https://doi.org/10.1145/3116595.3116596 event-place: Amsterdam, The Netherlands.
[69] William Seymour and Max Van Kleek. 2020. Does Siri Have a Soul? Exploring Voice Assistants Through Shinto Design Fictions. In
Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems (CHI EA ’20). Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3334480.3381809 event-place: Honolulu, HI, USA.
[70] Irina Shklovski, Scott D. Mainwaring, Halla Hrund Skúladóttir, and Höskuldur Borgthorsson. 2014. Leakiness and Creepiness in App Space: Perceptions of Privacy and Mobile App Use. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’14). Association for Computing Machinery, New York, NY, USA, 2347–2356. https://doi.org/10.1145/2556288.2557421 event-place: Toronto, Ontario, Canada.
[71] Sowmya Somanath, Ehud Sharlin, and Mario Costa Sousa. 2013. Integrating a robot in a tabletop reservoir engineering application. In Proceedings of the 8th ACM/IEEE international conference on Human-robot interaction (HRI ’13). IEEE Press, Tokyo, Japan, 229–230.
[72] Ruojia Sun, Ryosuke Onose, Margaret Dunne, Andrea Ling, Amanda Denham, and Hsin-Liu (Cindy) Kao. 2020. Weaving a Second Skin: Exploring Opportunities for Crafting On-Skin Interfaces Through Weaving. In Proceedings of the 2020 ACM Designing Interactive Systems Conference (Eindhoven, Netherlands) (DIS ’20). Association for Computing Machinery, New York, NY, USA, 365–377. https://doi.org/10.1145/3357236.3395548
[73] Omer Tene and Jules Polonetsky. 2014. A Theory of Creepy: Technology, Privacy, and Shifting Social Norms. Yale Journal of Law and Technology 16 (2014), 45.
[74] Hiroaki Tobita. 2017. Finger-Navi: Mobile Navigation Integrated Smartphone with Physical Finger. In
Proceedings of the 16th International Conference on Mobile and Ubiquitous Multimedia (Stuttgart, Germany) (MUM ’17). Association for Computing Machinery, New York, NY, USA, 103–106. https://doi.org/10.1145/3152832.3157381
[75] Aaron Toney, Barrie Mulley, Bruce H. Thomas, and Wayne Piekarski. 2003. Social weight: designing to minimise the social consequences arising from technology use by the mobile professional. Personal and Ubiquitous Computing 7, 5 (Oct. 2003), 309–320. https://doi.org/10.1007/s00779-003-0245-8
[76] Austin L. Toombs, Shaowen Bardzell, and Jeffrey Bardzell. 2015. The Proper Care and Feeding of Hackerspaces: Care Ethics and Cultures of Making. In
Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15). Association for Computing Machinery, New York, NY, USA, 629–638. https://doi.org/10.1145/2702123.2702522 event-place: Seoul, Republic of Korea.
[77] Blase Ur, Pedro Giovanni Leon, Lorrie Faith Cranor, Richard Shay, and Yang Wang. 2012. Smart, Useful, Scary, Creepy: Perceptions of Online Behavioral Advertising. In Proceedings of the Eighth Symposium on Usable Privacy and Security (SOUPS ’12). Association for Computing Machinery, New York, NY, USA. https://doi.org/10.1145/2335356.2335362 event-place: Washington, D.C.
[78] Maurice Waite. 2009. Oxford thesaurus of English. Oxford University Press.
[79] Margo C. Watt, Rebecca A. Maitland, and Catherine E. Gallagher. 2017. A case of the “heeby jeebies”: An examination of intuitive judgements of “creepiness”. Canadian Journal of Behavioural Science / Revue canadienne des sciences du comportement 49, 1 (Jan. 2017), 58–69. https://doi.org/10.1037/cbs0000066
[80] Roger L. Worthington and Tiffany A. Whittaker. 2006. Scale development research: A content analysis and recommendations for best practices.
The Counseling Psychologist 34, 6 (2006), 806–838.
[81] Jason C. Yip, Kiley Sobel, Xin Gao, Allison Marie Hishikawa, Alexis Lim, Laura Meng, Romaine Flor Ofiana, Justin Park, and Alexis Hiniker. 2019. Laughing is Scary, but Farting is Cute: A Conceptual Model of Children’s Perspectives of Creepy Technologies. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). Association for Computing Machinery, New York, NY, USA, 1–15. https://doi.org/10.1145/3290605.3300303 event-place: Glasgow, Scotland, UK.
[82] Hui Zhang, Munmun De Choudhury, and Jonathan Grudin. 2014. Creepy but Inevitable? The Evolution of Social Networking. In Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing (CSCW ’14). Association for Computing Machinery, New York, NY, USA, 368–378. https://doi.org/10.1145/2531602.2531685 event-place: Baltimore, Maryland, USA.
[83] Vygandas Šimbelis, Anders Lundström, Kristina Höök, Jordi Solsona, and Vincent Lewandowski. 2014. Metaphone: Machine Aesthetics Meets Interaction Design. In