Significant Otter: Understanding the Role of Biosignals in Communication
Fannie Liu, Chunjong Park, Yu Jiang Tham, Tsung-Yu Tsai, Laura Dabbish, Geoff Kaufman, Andrés Monroy-Hernández
Fannie Liu, Snap Inc. and Carnegie Mellon University, [email protected]
Chunjong Park, University of Washington, [email protected]
Yu Jiang Tham, Snap Inc., [email protected]
Tsung-Yu Tsai, Snap Inc., [email protected]
Laura Dabbish, Carnegie Mellon University, [email protected]
Geoff Kaufman, Carnegie Mellon University, [email protected]
Andrés Monroy-Hernández, Snap Inc., [email protected]
ABSTRACT
With the growing ubiquity of wearable devices, sensed physiological responses provide new means to connect with others. While recent research demonstrates the expressive potential for biosignals, the value of sharing these personal data remains unclear. To understand their role in communication, we created Significant Otter, an Apple Watch/iPhone app that enables romantic partners to share and respond to each other's biosignals in the form of animated otter avatars. In a one-month study with 20 couples, participants used Significant Otter with biosignals sensing OFF and ON. We found that while sensing OFF enabled couples to keep in touch, sensing ON enabled easier and more authentic communication that fostered social connection. However, the addition of biosignals introduced concerns about autonomy and agency over the messages they sent. We discuss design implications and future directions for communication systems that recommend messages based on biosignals.
CCS CONCEPTS
• Human-centered computing → Empirical studies in HCI; Empirical studies in collaborative and social computing; Empirical studies in ubiquitous and mobile computing.

KEYWORDS
computer-mediated communication, biosignals, interpersonal communication, social connection, smartwatches, heart rate, couples
ACM Reference Format:
Fannie Liu, Chunjong Park, Yu Jiang Tham, Tsung-Yu Tsai, Laura Dabbish, Geoff Kaufman, and Andrés Monroy-Hernández. 2021. Significant Otter: Understanding the Role of Biosignals in Communication. In CHI Conference on Human Factors in Computing Systems (CHI '21), May 8–13, 2021, Yokohama, Japan. ACM, New York, NY, USA, 29 pages. https://doi.org/10.1145/3411764.3445200
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
CHI '21, May 8–13, 2021, Yokohama, Japan
© 2021 Copyright held by the owner/author(s).
ACM ISBN 978-1-4503-8096-6/21/05.
https://doi.org/10.1145/3411764.3445200

Figure 1: A hypothetical couple using Significant Otter: Alice (left) sends her state otter to Bob (right) on her Apple Watch. (1a) Alice scrolls through the list of otter animations and selects the calm otter. (1b) Alice sends her otter by tapping on the watch screen. (1c) Alice receives feedback that her otter is visiting Bob. (2a) Bob is not actively using his watch. (2b) Bob gets a notification about Alice's otter, which he taps to view. (2c) Bob sees Alice's otter animation on his watch.
INTRODUCTION

Today, we rely heavily on digital technology to connect with others. Furthermore, with the global COVID-19 pandemic diminishing in-person social contact, technology-mediated communication is more prominent than ever before [35]. However, digital communication is well-known to be challenging due to limited access to important nonverbal cues, such as our body movements and facial expressions [32, 60, 63].

An emerging area of research in HCI has explored a novel social cue for improving the way we interact over technology: our biosignals. Biosignals, such as heart rate and skin conductance, are well known to change according to our physical and emotional responses, and can be revealed in everyday interactions using wearable sensor technologies. For example, applications like Pulsoid or Onbeat explore this possibility through livestreams of heart rate during gameplay or exercise. Researchers have shown that expressive biosignals, or biosignals displayed as a social cue, have the potential to facilitate communication as a means to recognize and express our emotions and physical being [21, 23, 26, 40–43, 50, 55, 57]. However, researchers have not yet described the role that biosignals play in communication. Biosignals are personal and private data that require careful design and consideration [20, 22, 26, 41]. In particular, as cues that are sensed and recommended by systems, they present a new form of AI-mediated communication that could shape our interactions in unintended ways [25]. Thus, it is crucial that we understand the value and consequences of integrating them into our existing means of communicating.

In the present work, we expand on the expressive biosignals literature by demonstrating the effects of shifting from communication without biosignals to communication with biosignals. We designed, developed, and deployed Significant Otter, an Apple Watch and iPhone app that enables romantic couples to send heart rate-driven otter animations as messages to each other.
By setting adaptive thresholds for each person based on their past heart rate and motion data, Significant Otter intelligently suggests animations that match their current emotional and physical state. To explore the design of expressive biosignals as AI-mediated communication, we incorporate AI-recommended sets of shareable sensed states. In a one-month within-subjects field study, we investigate how couples' behaviors and perceptions are affected when shifting from a sensing OFF version of the app, with no biosignals sensed, to a sensing ON version, with biosignals sensed. We present qualitative results from interviews during the study and discuss opportunities and challenges for biosignals in communication.

The core contributions of this work are: (1) Significant Otter, a novel smartwatch and phone app that promotes communication and connection between romantic partners through animated avatars recommended based on heart rate; (2) an empirical study with 20 couples who used Significant Otter with sensing OFF and ON that demonstrates the value of biosignals as a lightweight and authentic social cue; (3) design implications and future directions for expressive biosignals research, including suggestions for integration into social platforms as a form of AI-mediated communication.

Footnotes: https://pulsoid.net/ and http://onbeat.fit/. Significant Otter is publicly available on the App Store at https://apps.apple.com/us/app/significant-otter-couples-app/id1450105275
RELATED WORK

For the purposes of this study, we focus on communication between romantic couples. Given the intimate nature of physiological data [26], people feel most comfortable sharing them with close others [41], who may also be the most interested in and equipped to understand them as limited contextual cues. For instance, couples can interpret work breaks or distance from home based on how many steps their partner has taken [14]. In lightweight communication, defined by quick exchanges [8] through minimal interaction or content generation, even minimal messages between close partners can convey meaning like "thinking of you" [7, 28]. Thus, we target the closest partners: significant others.

A breadth of HCI research has explored technologies that can support significant others, including those that integrate biosignals. In their review of technology-mediated intimacy, Hassenzahl and colleagues described different strategies for supporting important aspects of intimacy [16]. For example, physicalness represents the physical aspect of intimacy, and has been simulated through mediated touch [15] and gestures [12, 52], as well as feeling someone else's heartbeat [64]. Expressivity describes expressing feelings through a language unique to the couple, such as mutual affection through "on-off" signals [29] or couple-specific symbols [36]. Awareness of one's partner has been explored in systems that display a partner's presence, activities, and mood through availability [6, 10] or sensed contextual information like location [2, 66], motion [3, 67], and heart rate [17], or a combination of these data [14].

With the integration of biosignals, Significant Otter similarly incorporates physicalness, expressivity, and awareness to support intimate communication. Significant Otter can simulate physicalness through shared heart rate representing the body's physical state. It can support expressivity by providing an emotional language for couples through otter animations embedded with heart rate, which communication partners can use to create emotional meaning together [41]. Finally, the app's heart rate animations can enable awareness by providing contextual cues that display presence, activities, and mood [17, 42]. We describe the full Significant Otter system in Section 3.
Hancock and colleagues define AI-mediated communication as "mediated communication between people in which a computational agent operates on behalf of a communicator by modifying, augmenting, or generating messages to accomplish communication or interpersonal goals." They suggest that AI-mediated communication systems may have important effects on interpersonal dynamics, including self-presentation and disclosure, and subsequently, meaningful and intimate relationships [25]. As a relatively new area of research, much of the AI-mediated communication work has focused on text, such as AI-recommended wording in emails [4] and AI-generated profiles on sites like Airbnb [25]. The present work expands on this research by exploring AI-mediated communication through expressive biosignals.
Expressive biosignal systems recommend a user's current state as part of interpersonal communication. The recommendation can be used to augment communication, such as by providing emotional context for text messages [17, 41] and joint activities in mixed reality [55], or to generate new messages in communication, such as emoji-like animations [42]. Like other forms of AI-mediated communication, AI-recommended states through biosignals could impact key aspects of communication. For example, in their deployment of Ripple, a shirt that displayed a wearer's skin conductance, Howell and colleagues found that people granted the system high degrees of authority over their feelings and, therefore, the feelings they conveyed to others [22]. On the other hand, Liu and colleagues showed that some people may strongly disagree with an AI-recommended state, and subsequently fail to use the system to communicate meaningfully with others [42]. To address these issues, we explore the design of an expressive biosignal system with a lower level of autonomy, i.e., the degree of control it has over messages [25]. Specifically, we explore communication in which people choose between shareable states suggested by the AI, rather than the AI providing only one possible state. To understand the effects of biosignals-based recommendations, we compare people's perceptions of their communication when they can share from a set of random versus sensed states, described in more detail in Section 3.
Existing literature on expressive biosignals has primarily explored their potential for supporting how we communicate and connect with each other. Following the interaction model of communication [65], these works suggest that biosignals can support the key stages of communication: sending a message, receiving and understanding that message, and responding to it with feedback. By sharing their biosignals as a message, a sender can express both emotions and daily activities, such as texting one's heart rate to convey feeling down or taking a walk [41]. Upon viewing the sender's biosignals, a receiver can become aware of the sender's state. Hassib and colleagues showed that when accessing someone's heart rate on a mobile messaging app, people can recognize when that person is angry or on their way home [17]. A recent controlled study also showed that biosignals increase emotional perspective-taking, or imagining someone else's emotions, in the context of a narrative story [43]. Receivers may subsequently respond with feedback based on their understanding of the sender's emotions. In prior studies where people shared their biosignals in conversation or sporadically during the day, receivers often acknowledged, provided support for, or discussed the meaning of the sender's biosignals [21, 22, 42].

Expressive biosignals may also impact social connection, or "a person's subjective sense of having close and positively experienced relationships with others in the social world" [56]. According to Slovák and colleagues, expressive biosignals may promote connectedness between people in two ways. First, they suggest that expressive biosignals are a form of emotional self-disclosure, as they can represent our internal emotional reactions during personal experiences [57]. Self-disclosure is crucial for people to connect with each other, where it can improve the quality of interactions and closeness in relationships [1, 39].
Second, Slovák and colleagues suggest that biosignals indicate a person's physical being as a representation of the daily physiological workings of our heart and other organs, thereby creating feelings of presence [57], which can lead to feelings of connectedness [24]. For instance, Howell and colleagues showed that listening to someone's heartbeat on a bench can elicit a sense of being alive and connected to another living person [23]. Moreover, Liu and colleagues suggest that remote interactants can feel present with each other when sharing their biosignals over smartwatches [42].

Although these prior works suggest the potential for biosignals to support communication and connection, they have not illustrated the value of expressive biosignals. In particular, their emotionally expressive ability may already be achieved verbally and nonverbally through emojis and stickers [44, 59]. Since expressive biosignals can elicit concerns around privacy [41, 57], cognitive load [40], and accuracy [22, 42, 47], it is crucial that we understand the value they add to existing modes of expression. Research suggests several possibilities for expressive biosignals to improve how we interact today, drawing from their potential to validate feelings as "objective" cues [22, 41, 55]. In the different stages of communication, biosignals sent as a message could be a more vivid way to express ourselves emotionally, understand those expressions directly from the body, and subsequently provide improved feedback. As vivid emotional expressions from our bodies, they may be perceived as more authentic and intimate disclosures of our internal experiences, leading to greater feelings of connection.
We explore these possibilities by comparing communication with and without biosignals. To our knowledge, only a few studies have compared the presence and absence of biosignals in social contexts [9, 26, 43, 46]. However, these works did not test in real-world dyadic communication, instead focusing on perceptions of a target other in controlled laboratory settings. We address this gap through a field study with couples who used two versions of Significant Otter, an expressive biosignals app. Specifically, we investigate communication and social connection between couples who shift from a sensing OFF version, with no biosignals, to sensing ON, with biosignals.
RQ1: How does shifting from sensing OFF to sensing ON affect the stages of communication (sending, understanding, responding) between couples?

RQ2: How does shifting from sensing OFF to sensing ON affect social connection between couples?

We designed Significant Otter based on prior expressive biosignals systems tested in everyday contexts [41, 42]. Like these works, we focus on heart rate due to its wide availability on consumer-grade wearables compared to other biosignals, representing the data as an animated avatar. However, unlike prior systems, Significant Otter recommends a set of heart rate-driven avatars, rather than a single number or avatar, in order to further explore biosignals in AI-mediated communication. We detail our system in the following section.
SIGNIFICANT OTTER SYSTEM

Significant Otter is an Apple Watch and iPhone app that enables two people to send animated otter characters to each other based on their biosignals. Each person has an animated otter that reflects their inner state, which they can send to their partner. We designed Significant Otter to provide a playful way for couples to communicate. The app has been publicly available since November 2019, and over 59 thousand people have installed it as of January 2021.

To investigate our research questions, we created two study versions of the Significant Otter app: sensing ON and sensing OFF, where biosignals are either sensed or not sensed, respectively. With sensing ON, people can send otter animations from a list of sensed states, suggested based on their biosignals. With sensing OFF, people can send otter animations from a list of random states, randomly selected by the system. For both versions, the app prompts people to name their otter and pair with their partner. Sensing ON requires people to accept HealthKit and Motion & Fitness permissions to access sensed heart rate and activity data from the watch. People can then view their otter on their watch or phone, and scroll through the list of animated states to send one to their partner. We developed Significant Otter as a watch-first app, since smartwatches can be an unobtrusive and lightweight platform for communicating biosignals [42], but we included a phone version due to Apple's watch app requirements at the time.

We ran two pilot studies as initial tests for Significant Otter. The first pilot tested people's understanding of the sensing ON version, to ensure that people would recognize that their heart rate is sensed and tied to the animations. The pilot included seven couples (employees of a technology company and their significant others) who used the app freely for one week and were asked about their usage and perceptions of the app. Based on their responses, we iterated on the app to improve its usability.
For instance, we determined the final list of animations based on the pilot results, which suggested that we should include states that people typically relate to heart rate (e.g., exercise), as well as limit the number of available animations for usability.

The second pilot tested transitioning from sensing OFF to sensing ON. We ran this pilot for three weeks with three couples (employees of the same technology company and their significant others). The results confirmed that people viewed sensing ON as a feature update that included biosignals, and provided initial insights that informed the development of the study materials. We also ran this pilot during the early stages of the COVID-19 stay-at-home orders in many states in the United States. Stay-at-home orders required that people stay in their residences except for essential trips, such as for daily food and supplies, or if they were essential workers (e.g., in life-sustaining occupations, including healthcare, food retail, and public transportation). The pilot informed ways to address possible COVID-19 circumstances that would affect the study. The final versions of the study app and study design are described in the following sections.

Footnotes: Study versions of Significant Otter were not publicly available; public users were not included in our study. HealthKit is Apple's framework for health and fitness data, such as recorded vital signs and workouts. Motion & Fitness refers to Apple's Core Motion framework for motion data, such as accelerometer and gyroscope data.
People can send two types of animations to their partner: states and reacts. People can send states to initiate communication, and use reacts to respond to their partner's states. The study versions of the app contain a subset of the animations available in the public version, in order to focus on states that could be interpreted from biosignals.
Significant Otter presents an interpreted representation of biosignals: animated avatars that correspond to different emotional and physical states. That is, the system determines an interpretation of a user's heart rate by mapping it to multiple possible states, as opposed to presenting raw biosignals data (e.g., a heart rate number) [17]. We made this decision based on prior work [41], which shows that raw data is less engaging and requires additional contextual clarification that may not be feasible on a lightweight smartwatch communication app.

There are four types of state animations: emotions, activities, greetings, and affection. We chose emotion and activity animations according to expressions through biosignals shown in prior work [41]. We included greeting and affection animations to represent minimal expressions of mutual affection [16], which prior work suggests are common use cases ("hello," "thinking of you") on similar apps [42]. To limit the number of available states according to our pilot results, we included only a few greeting and affection states such that we could cover a sufficient number of sensed states (emotions and activities) to address our research questions.
Emotions.
These include excited, angry, calm, sad, and neutral otter animations. We chose these states to represent each quadrant of the valence-arousal model of emotion [53]. The sensing ON version senses these states using heart rate data extracted from HealthKit. Since valence cannot be determined from the heart rate data, the system suggests states according to arousal levels determined from the data [11]. For instance, excited and angry states are available when people are in high arousal, while the calm and sad states are available when they are in low arousal. Significant Otter determines different ranges of heart rate based on people's historical data, including their min, max, walking, and resting heart rates from HealthKit, which Apple updates daily. The ranges are shown in Figure 3, and were determined through empirical testing within the research team.
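The mapping above can be sketched in code. This is an illustrative sketch only: the cut points below are hypothetical placeholders (the paper's actual thresholds were tuned empirically and shown in Figure 3), and the names `HeartRateProfile` and `arousal_level` are ours, not the app's.

```python
from dataclasses import dataclass

@dataclass
class HeartRateProfile:
    """Per-user historical statistics; HealthKit exposes resting, walking,
    min, and max heart rate, which Apple updates daily."""
    resting: float
    walking: float
    minimum: float
    maximum: float

def arousal_level(bpm: float, profile: HeartRateProfile) -> str:
    """Map a heart-rate sample to a coarse arousal level using
    person-specific thresholds. Cut points are illustrative, not
    the app's empirically tuned values."""
    if bpm < profile.resting:
        return "low"
    if bpm < profile.walking:
        return "neutral"
    # Split the walking-to-max range into "high" and "very high".
    midpoint = (profile.walking + profile.maximum) / 2
    return "high" if bpm < midpoint else "very high"

# Valence cannot be inferred from heart rate alone, so both emotion
# options in a quadrant are suggested and the user picks one.
SUGGESTIONS = {
    "low": ["calm", "sad"],
    "neutral": ["neutral"],
    "high": ["excited", "angry"],
    "very high": ["excited", "angry"],
}
```

Because only arousal is sensed, the system surfaces a small candidate set per level and leaves the valence judgment to the user, consistent with the low-autonomy design described later.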
Activities.
These animations represent daily activities, including eating, sleeping, walking, running, and exercising. Eating and sleeping are time-based activities, inferred from heart rate changes during specific times. Eating is detected based on common meal times in the US (11AM-2PM and 5PM-8PM) [38] and neutral or high arousal [54], while sleeping is detected based on common bedtimes and hours slept in the US (10PM-8AM) [33] and low or neutral arousal [37]. Walking and running are motion-based activities, classified by Apple's Core Motion. Exercising is detected solely from heart rate changes, and is presented during high or very high arousal. Since Apple already provides activity detection for walking and running, we did not use heart rate data to sense those states. However, participants perceived the sensed states as tied to heart rate.
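The activity gating described above can be sketched as follows. The time windows come from the paper; the function name and the exact combination logic are our illustrative assumptions, not the app's implementation.

```python
from datetime import time
from typing import List, Optional

def candidate_activities(now: time, arousal: str,
                         motion: Optional[str]) -> List[str]:
    """Sketch of activity-state gating: eating and sleeping are
    time-windowed and arousal-gated; walking/running come from the
    motion classifier; exercising is heart-rate-only."""
    states = []
    # Common US meal times (11AM-2PM, 5PM-8PM) with neutral/high arousal.
    in_meal = time(11) <= now <= time(14) or time(17) <= now <= time(20)
    if in_meal and arousal in ("neutral", "high"):
        states.append("eating")
    # Common US sleep window (10PM-8AM) with low/neutral arousal.
    in_sleep = now >= time(22) or now <= time(8)
    if in_sleep and arousal in ("low", "neutral"):
        states.append("sleeping")
    # Walking and running are taken directly from motion classification.
    if motion in ("walking", "running"):
        states.append(motion)
    # Exercising is gated on heart rate alone (high or very high arousal).
    if arousal in ("high", "very high"):
        states.append("exercising")
    return states
```

Note that several activities can be plausible at once; as with emotions, the app surfaces the candidates and lets the user choose.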
Table of states:
Emotions (heart rate): excited, happy, angry, sad, surprised, bored, neutral
Activities: eating (time), sleeping (time), walking (motion), running (motion), exercise (heart rate)
Greetings: waving
Affection: hugging, handholding

Figure 2: Examples and table of otter states. (a) Emotions (sad); (b) Activities (sleeping); (c) Greetings (waving).

Figure 3: Heart rate sensing based on people's historical data from Apple HealthKit, with thresholds determined through empirical testing. The thresholds are relative to being stationary, according to prior work in emotion detection using physiological data [11].
Greetings.
This category contains only waving. It is not sensed, as people may want to greet their partner at any moment. Instead, the app rotates its availability with the affection animations such that one can always convey either "hi" or "thinking of you" to their partner.
Affection.
This category shows animations where the couple's two otters are interacting, including hugging and holding hands. These are not sensed, as people may want to show affection at any moment. Instead, these animations randomly rotate with greetings, as described above.
We included 14 different reacts to cover a variety of responses people could have to their partner's state. Significant Otter initially had 22 react animations, which were designed based on social support literature and existing react systems [61]. For the former, we focused on emotional support, or expressing caring and concern, as other types of support typically require more information [5, 27, 45], which would not be suitable for a lightweight platform. Reacts are not sensed in order to explore people's decisions around responding to their partner's states.

Based on feedback from the first pilot study, we reduced the number of reacts to increase usability. We ran a survey on Mechanical Turk to understand how people perceive the react animations in order to select the most relevant ones. 45 participants interpreted characteristics of each react (e.g., the extent of emotional support it provides), gave an example text message that would prompt them to use it as a response, and wrote the response that they believed it conveyed in words (see supplemental materials). We selected the final set of reacts to cover diverse possible responses to the Significant Otter states. This included emotions (similar to states), acknowledging the sender's state or receipt of their state (e.g., "I agree" or "OK"), showing caring and affection for the sender (e.g., "I'm here for you" or "I love you"), and indicating a desire for follow-up on a different platform (e.g., "Call me ASAP"). We removed reacts that elicited interpretations that did not suit the available states or were overly ambiguous.

Table of reacts:
Emotions: excited, happy, angry, sad, surprised, bored
Acknowledgement: thumbs up, nodding
Caring: hugging, handholding, love, pat on the back
Follow-up: question, call me

Figure 4: Examples and table of otter reacts. (a) Acknowledgement (thumbs up); (b) Caring (pat on the back); (c) Follow-up (question).
People can share their otter state through the main screen of Significant Otter on their watch or phone. People can enter the main screen through notification prompts to view their otter (explained in Section 3.5 below), or by opening the app on their own. On the watch, people can open the app through the app complication on their watch face, which shows an otter icon, similar to an emoji, of one of their currently available states. The icons are designed to act as short-form representations of the larger animations, such that people can glance to see their otter state without opening the app. After opening the app, people can simply tap on their otter to send it to their partner, and their partner can view the otter's animated state on their own device (see Figure 1). People can scroll to view and send other possible states, using the Apple Watch crown or swiping up/down on the phone.

Footnote: Complications are widgets for the Apple Watch watch face that display information about an app.
We show multiple shareable states due to limitations in detecting emotions from signals available on the watch (e.g., low granularity of heart rate, difficulty determining valence), as well as recommendations from prior work [42] to explore systems that collaborate with the user to determine their state. By providing a limited set of other possible states, people can reflect on their subjective emotions alongside the app's suggestions, and then select one of the recommended states. Therefore, the system has low autonomy [25], where the states that people send are partially automated and partially determined by the user. At least one affection or greeting state is always included in the sensed state list, as described above. With sensing OFF, the list is restricted to two to five random states to match the possible sizes of the sensed state list. The states are randomized every 10 minutes to match the frequency of sensing states with sensing ON.
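The sensing OFF list construction can be sketched as below. This is a hypothetical reconstruction: the pool names are ours, and we assume the two-to-five random states are drawn in addition to the always-present greeting/affection slot (the paper does not specify this detail).

```python
import random
from typing import List

# Illustrative pools; the app's internal state names may differ.
SENSED_POOL = ["excited", "angry", "calm", "sad", "neutral",
               "eating", "sleeping", "walking", "running", "exercising"]
SOCIAL_POOL = ["waving", "hugging", "handholding"]  # greetings/affection

def random_state_list(rng: random.Random) -> List[str]:
    """Sensing-OFF list: two to five randomly chosen states plus one
    greeting-or-affection slot. The app refreshes this list every
    10 minutes to mirror the sensing-ON cadence."""
    n = rng.randint(2, 5)           # match the possible sensed-list sizes
    states = rng.sample(SENSED_POOL, n)
    states.append(rng.choice(SOCIAL_POOL))  # always one social option
    return states
```

This keeps the two app versions visually indistinguishable in list size and refresh rate, isolating the presence of sensing as the manipulated variable.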
When people receive their partner's otter, they have the option to react within the app. After they view their partner's state animation, the app will enter the react mode, where they can scroll through all 14 possible reacts (using the watch crown or swiping up/down on the phone) and tap on one to send it as a response to their partner. We include all possible reacts in order to understand people's preferences and decisions in reacting with certain animations. People can also react through "quick reacts," selecting from one of four possible reacts shown in the notification they receive, to respond without opening the app. Quick reacts are only featured on the watch and are primarily included for usability, as a lightweight way to respond without viewing the full animation. The four available quick reacts are fixed and selected based on the most frequently used reacts in the public version of the app (love, nodding, handholding, and hugging). Finally, people can choose not to react by selecting "Don't react" in the app after viewing the animation, or by dismissing the notification. Reacts are the same between both study versions of the app in order to explore potential differences in how people react to their partner's sensed or randomized state. People cannot react to a react animation. After a user views a react, the app returns to the main screen with the list of states.

Figure 5: Bob reacting to Alice's otter on the Apple Watch. (a) Bob is presented with a set of react otters to react to Alice's state otter. (b) Bob scrolls through the list of react otters to select one. (c) Bob taps to send his selected react otter to Alice. (d) The app confirms that Bob's react otter was sent to Alice.

Figure 6: Significant Otter notifications. (a) Partner's state otter visit with quick reacts (state pictured: sad). (b) User's own state otter notification, which comes periodically during the day. (c) Opening (b) with sensing ON shows a sensed state icon. (d) Opening (b) with sensing OFF shows a randomly available state icon.
People are notified when their partner sends them their state otter. The notification includes an otter icon representing their partner's state otter as well as the four quick reacts (see Figure 6(a)). We used icons for the notifications such that people can glance at the state they received and dismiss or quick react, if desired. Tapping on the icon opens the app to play their partner's state animation, and then enter the in-app react mode.
People are notified when their partner reacts to a state that they sent. The notification includes an icon representing their partner's react otter. Tapping on the icon opens the app to play the animation of the react otter, alongside an icon representing the state to which their partner is reacting.
People are notified periodically during the day (at least 45 minutes apart, to minimize invasiveness) with a "Message from your otter" notification (see Figure 6(b)). Opening the notification shows an icon of an available state animation. We include these notifications to encourage people to share their current state with their partner. Notifications are time-based and thus appear regardless of whether the user's state changed. This is due to limitations of the Apple Watch, which does not enable the real-time heart rate sensing (outside of a fitness tracking session) necessary for recording in-the-moment state changes. This also helped to control differences in the notifications between the two versions of the app, as the app with sensing OFF cannot record state changes.
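The time-based gating above amounts to a minimum-gap check. A minimal sketch, assuming a simple timestamp comparison (the function name and exact scheduling mechanism are ours, not the app's):

```python
from datetime import datetime, timedelta

# Self-notification prompts fire no more often than every 45 minutes.
MIN_GAP = timedelta(minutes=45)

def should_notify(last_sent: datetime, now: datetime) -> bool:
    """Time-based 'Message from your otter' prompts: fire only if at
    least 45 minutes have passed since the previous prompt, regardless
    of whether the user's state changed (the watch cannot stream heart
    rate continuously outside a workout session)."""
    return now - last_sent >= MIN_GAP
```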
CHI '21, May 8–13, 2021, Yokohama, Japan. Liu et al.
Table 1: Participant couples.

participants | genders | ages | relationship length | married | cohabitating | US state
P1, P2   | F, M  | 36, 36 | 1-3 years | No  | No  | NY
P3, P4   | F, M  | 25, 38 | 1-3 years | No  | Yes | MD
P7, P8   | F, M  | 22, 27 | 1-3 years | No  | Yes | CA
P9, P10  | M, F  | 24, 24 | 1-3 years | No  | No  | CA
P11, P12 | M, NB | 24, 27 | 1-3 years | No  | Yes | IN
P13, P14 | M, F  | 32, 37 | > 6 years | Yes | Yes | CA
P15, P16 | M, F  | 22, 20 | 1-3 years | No  | Yes | GA
P17, P18 | M, F  | 25, 25 | 1-3 years | No  | No  | CA
P19, P20 | M, F  | 33, 30 | > 6 years | Yes | Yes | AZ
P21, P22 | M, F  | 49, 51 | 4-6 years | Yes | Yes | OR
P23, P24 | M, F  | 33, 35 | > 6 years | Yes | Yes | TX
P25, P26 | M, F  | 20, 20 | 4-6 years | No  | No  | MA
P27, P28 | M, F  | 26, 28 | 1-3 years | No  | No  | CA
P29, P30 | F, M  | 24, 26 | 1-3 years | No  | No  | VA
P31, P32 | F, M  | 32, 27 | 1-3 years | No  | No  | CA
P33, P34 | M, F  | 22, 21 | 1-3 years | No  | No  | AL
P35, P36 | M, F  | 19, 18 | 11 months | No  | No  | MO / IA
P37, P38 | F, M  | 29, 31 | > 6 years | Yes | Yes | TX
P39, P40 | F, F  | 33, 37 | 1-3 years | No  | Yes | TX
P41, P42 | M, F  | 25, 26 | > 6 years | No  | Yes | AZ
Tapping on the icon allows the user to view the animation in the app. People can also send directly from the notification using the "Share" button. With sensing ON, the available state is randomly selected from the list of sensed states (not including greetings or affection). Sensing ON also shows a heart icon to indicate that the state is selected from the sensed list. With sensing OFF, the available state is randomly selected from the list of random states.
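A rough sketch of this selection logic is below; the state names and function are illustrative assumptions of ours, not taken from the app's code.

```python
import random

# Illustrative state names only; the app's actual state lists differ.
GREETINGS_AND_AFFECTION = {"waving", "hugging"}
SENSED_STATES = ["stressed", "calm", "active", "tired", "waving", "hugging"]
RANDOM_STATES = ["happy", "sleepy", "walking", "waving", "hugging"]

def available_state(sensing_on):
    """Pick the state shown in a 'Message from your otter' notification."""
    if sensing_on:
        # Sensing ON: draw from the sensed states, excluding
        # greetings and affection.
        pool = [s for s in SENSED_STATES if s not in GREETINGS_AND_AFFECTION]
    else:
        # Sensing OFF: draw from the full randomized list.
        pool = list(RANDOM_STATES)
    return random.choice(pool)
```

The key design point is that both conditions surface a single candidate state at notification time; only the pool it is drawn from differs.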
To test Significant Otter, we deployed the sensing OFF and ON versions of the app in a one-month field study.
Participants. Recruitment took place from late March to early April 2020, during the COVID-19 stay-at-home orders for many US states. We recruited 21 romantic couples; however, one couple was removed at the start of the study due to failing to meet the minimum participation requirement (explained in the Procedure below). This left a total of 20 couples (N=40). Participants were recruited through Reddit posts on the SampleSize and AppleWatch subreddits, recruitment posts for about 30 US cities on Craigslist (including major cities in different areas of the US to account for differences in state responses to COVID-19), and snowball sampling (social media posts).

Participants took a screening survey to ensure that they met the study requirements. These included being in an exclusive romantic relationship, living in the US, being able to participate in onboarding and interview sessions via video call, and owning an Apple Watch Series 3 or above that they had used for at least two weeks, to ensure familiarity with the watch. Participants described using their Apple Watch for a variety of reasons, predominantly fitness tracking, but also music, news, weather, short texting, and checking notifications. The screening survey also included several questions about participants' circumstances concerning COVID-19, including their living situation with their partner. Results from the second pilot study suggested that people are less likely to engage with the app when collocated with their partner (e.g., if they were both working from home); therefore, we recruited couples who were living apart or living together with one or both of them spending most of their time elsewhere.

The couples we recruited were diverse in several dimensions, including their backgrounds, careers, demographics, and length of relationship. About half of the couples were not living together, including one couple in a long-distance relationship.
The rest of the couples were living together with at least one person working outside the home as an essential worker. Table 1 summarizes the demographic information per couple. Unfortunately, our sample was not diverse in sexual orientation, as most couples were heterosexual. 13 participants identified as Hispanic, Latino or Spanish, 11 as White/Caucasian, 9 as Asian, 4 as Black/African-American, 1 as Native Hawaiian or other Pacific Islander, 1 as Asian/Hispanic, and 1 as Biracial. Participants had various jobs, including students, healthcare professionals, personal trainers, technicians, restaurant workers, and analysts. Six were unemployed or furloughed due to COVID-19.
Procedure. We conducted a one-month study deploying Significant Otter in the wild with couples who were Apple Watch users. All participants used the app with sensing OFF for the first two weeks and with sensing ON for the latter two weeks. We intentionally did not counterbalance the order of the versions, as our research questions focused on the shift from the status quo of communicating without biosignals to communicating with biosignals. Moreover, the removal of "sensing" as a feature in a counterbalanced study could disrupt participants' mental model of the app, as opposed to a feature update when switching from sensing OFF to ON. The study consisted of the following sessions:
Onboarding session. Each couple completed a 30-minute onboarding session with one of the researchers over a video call. During this session, participants installed Significant Otter with sensing OFF on their iPhone and Apple Watch through Apple's TestFlight and added the app as a complication on their watch. During the installation, participants completed a short questionnaire about their background and relationship with their partner. Then, we instructed them on using the app, and asked them to test it during the session to ensure that it was installed and working correctly.

Daily surveys. Participants could freely use Significant Otter as little or as much as they wanted during the study. In order to capture participants' perceptions of the app and behaviors throughout the study, we asked participants via email to complete brief daily surveys about how they used the app that day (see supplemental materials). The first daily survey included comprehension questions to ensure that participants understood how to use Significant Otter. Given the time and effort involved in filling out a survey every day, we required participants to fill out a minimum of three daily surveys per week.
Mid-study session. Two weeks after their onboarding session, each participant individually completed a 30-60 minute mid-study session over video call. Before the session, participants completed a mid-study questionnaire similar to the onboarding questionnaire, with the addition of questions about COVID-19-related changes they experienced since the start of the study. During the call, we conducted a semi-structured interview about participants' experiences with the sensing OFF version of the app (see supplemental materials). The interview included questions about participants' overall thoughts and perceptions about the app and its different features (e.g., reactions to the notifications), and how they used the app with their partner (e.g., when and why they sent their own otter, what they thought of their partner's otter and how they responded, if at all). To help participants recall their experiences, we showed GIFs of the top states and reacts that they sent and received. After completing the interview, participants installed and were given instructions on the sensing ON version of the app. We explained that with sensing ON, the app displays state animations based on heart rate using Health data from the Apple Watch. After the session, participants were given a $75 Amazon gift card as mid-point compensation.

Exit session. Two weeks after their mid-study session, each participant individually completed a 30-60 minute exit session over video call. Before the session, participants completed a questionnaire identical to the mid-study questionnaire. During the call, we conducted a semi-structured interview about participants' experiences with the sensing ON version of the app. The interview was similar to the mid-study interview, with the addition of questions about how they understood and perceived sensing in the app (see supplemental materials). At the end of the interview, we asked participants to uninstall Significant Otter. Since multiple participants expressed interest in continuing to use the app, we provided links to the public version, which participants could freely download. After the session, participants were compensated with another $75 Amazon gift card for completing the study.

[Footnote: TestFlight is an online service for deploying apps to testers.]
[Footnote: In the session, we stated: "Version 2 will try to sense your state using your heart rate and other contextual data. The state otters that you'll be able to see and send to your partner will be based on this sensing."]
App and study updates. During the study, we made updates to the app and some study materials to address issues that emerged as participants used the app. Participants installed bug fixes by uninstalling and reinstalling the app through TestFlight. We also adjusted how we explained the sensing ON version and added related comprehension questions to the daily surveys, since some participants expressed confusion around how sensing worked (e.g., whether opening the app triggered sensing). Additionally, several participants had issues receiving the app notifications, such that they may not have received any during the first or second half of the study. This is a limitation of the Apple Watch OS, which restricts when and how often apps can send push notifications. Since we were unable to address this issue, participants received notifications at different frequencies.
Data analysis. We analyzed transcripts of the mid-study and exit interviews using a grounded theory approach [58], focused on how participants' perceptions and behaviors shifted between the two versions of the app. First, we segmented the transcripts into high-level categories according to our interview protocol, which highlighted the different aspects of communication (e.g., content of what they sent, thoughts about their partners' sent state, feedback provided, etc.). This enabled us to analyze similar concepts together. Categories were the same for the mid-study and exit interviews, with the addition of categories for the exit interview specific to sensing (e.g., how sensing worked) and external factors that affected usage (e.g., novelty effects, COVID-19-related changes). Next, we developed open codes for each category based on a subset of transcripts, labeling them according to similarities in participants' responses. Three coders validated the subsequent codebook by independently coding another subset of transcripts, meeting frequently to discuss the codes and ensure high inter-rater reliability. They achieved fuzzy Fleiss' kappas [34] above 0.7. After validating the codebook, the three coders divided and coded the rest of the segments. We then performed axial coding by grouping similar codes together and analyzing them to form cross-cutting themes. Finally, we refined these themes according to our research questions.
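The reliability measure used here is a fuzzy variant of Fleiss' kappa [34]; as a reference point, the standard (non-fuzzy) Fleiss' kappa on which it builds can be computed as follows. This sketch is our illustration, not the authors' analysis code.

```python
def fleiss_kappa(ratings):
    """Standard Fleiss' kappa for a ratings matrix of shape
    (items x categories), where each cell counts how many of the
    n raters assigned that item to that category."""
    n_items = len(ratings)
    n_raters = sum(ratings[0])  # assumes the same number of raters per item
    # Observed per-item agreement P_i, averaged to P_bar
    p_items = [
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in ratings
    ]
    p_bar = sum(p_items) / n_items
    # Chance agreement P_e from overall category proportions
    totals = [sum(col) for col in zip(*ratings)]
    p_e = sum((t / (n_items * n_raters)) ** 2 for t in totals)
    return (p_bar - p_e) / (1 - p_e)
```

Perfect agreement yields a kappa of 1.0; the 0.7 threshold reported above falls in the "substantial agreement" band of common benchmark scales for kappa.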
In this section, we describe how participants used Significant Otter during the study and provide detailed insights around the reasons behind their usage of both versions of the app. Overall, participants sent a total of 2,474 states and 987 reacts during the study. For both app versions, we observed novelty effects in the first week after installation (weeks 1 and 3); in week 3, participants described using the app more due to curiosity around the new sensing feature. Participants used the app daily even after novelty effects wore off, sending an average of 1.66 states and 0.71 reacts per day with sensing OFF (week 2) and an average of 1.54
states and 0.54 reacts with sensing ON (week 4). There was a slight non-significant drop in usage from sensing OFF to sensing ON, which participants attributed to COVID-19-related changes to their life circumstances (e.g., changes in their work schedule or ability to meet their partner). Despite using the sensing ON version less, 30 out of 40 participants preferred it over sensing OFF for enhancing their ability to communicate and connect with their partner. At the same time, participants experienced challenges in using the app to communicate what they wanted to their partner. We describe these opportunities and challenges for integrating biosignals into communication in more detail below.
Our results suggest that sharing sensed states can promote efficient and personal communication between couples, and help them feel connected with each other. This aligns with prior work [42], which similarly showed that people can easily keep in touch through sharing biosignals-driven animated shapes on their smartwatch. We build on this work through our comparison of participants' usage of Significant Otter with sensing OFF and ON. Specifically, we suggest that the app with sensing OFF functioned similarly to emojis, stickers, and GIFs, while sensing ON introduced a new, enhanced form of communication.
Easier sharing.
Participants felt that sharing from sensed state suggestions was easier than sharing from a randomized list of animations. With sensing OFF, participants scrolled through the list of state animations as if they were scrolling through a shorter emoji/sticker/GIF keyboard. They appreciated the readily available, unintrusive messages that helped them quickly communicate back and forth with their partner without having to use words. However, some participants were frustrated with access to only two to five random states, expecting a wider and more expressive variety, while others appreciated that they did not need to look through hundreds of options for a specific one. After updating to sensing ON, participants perceived that the state list was personalized to them, where the sensed states were more accurate to how they were feeling or what they were doing than the randomized states. Thus, their otter became more representative of them and was easier to send to their partner.

"The sensed state otters [with sensing ON] hit closer to home than [with sensing OFF], where they were just random otters. So it narrowed it down better." - P3

This was reinforced by the smartwatch, which participants were more compelled to use in the latter half of the study in order for the sensing feature to work. P14 noted that the sensing feature gave him a reason to use his watch:

"I'm not really super attached to this watch compared to my phone.... So I think having a reason to want to use the watch for this app and its sensing my vitals and things like that is...a good thing to include in this type of app." - P14

The smartwatch prompted participants with notifications that became personalized suggestions about how they were feeling with sensing ON, rather than dismissable nudges to use the app with sensing OFF.
When participants viewed the notifications without opening the app, they simply made "yes or no" decisions to share their otter, rather than scrolling through emoji-like options or even having to think to send their otter on their own.

"Because it knows exactly how you're feeling versus like me having to look through it and kind of tick something. Because sometimes I don't even know how I'm feeling...you don't really think about how you're feeling until you have to...sit and think about it." - P9
Less ambiguity.
Sharing a sensed otter was also less ambiguous than sharing a randomized otter. With sensing OFF, participants assigned various meanings to the otter animations, including those that they were not originally designed for, such as suggestions and needs. Participants would even send animations with no intended meaning, just to send one to their partner. This flexibility in the otter animations aligns with the flexibility of emojis, where an emoji can be used to convey numerous possible messages [49, 68]; emojis are thus known to be expressive yet ambiguous even when used in textual contexts [48]. With sensing OFF, some participants described following up over verbal conversation to clarify the animations they sent, or struggling to interpret and respond to the animations they received, subsequently resorting to "safe" otter reacts.

"Like that walking otter...she might [react with a] thumbs up, but that doesn't necessarily mean that...she's understanding that the walking otter means that I would like to go for a walk...the thumbs up could be, 'yes, I want to go for a walk too.' It could be like, 'walking is good.' So I mean, there's not a lot of clarity with just having the simplistic reaction." - P38

"I was confused about that otter...out of my confusion I replied with him because he looked pretty chill.... I was just trying to say no hostility as well, 'cause I didn't know what the other otter was doing. So that was a pretty safe response." - P33

Our results suggest that biosignals helped reduce this ambiguity, where participants no longer assigned different meanings to the animations. Instead, the animations became meaningful on their own, where participants understood them as simply representative of their own or their partner's current state. This facilitated sending state animations, because participants no longer had to think about what they could mean and, as in prior work [41], sent them primarily to share their current state.
This also facilitated understanding and responding to those animations, because participants could react appropriately when they understood what the animations meant, such as by agreeing, reciprocating, or showing concern for their partner's state.

"When I was sending the message, my intent behind it wasn't, 'Oh, I'm reminding you to do something.' I'm sharing my mood with you...[sensing OFF] was just more inclined to him and [sensing ON] was more inclined to me." - P36

"I think it's different insofar that I felt like she really wasn't sending random ones. I feel like they were more based on what she was doing. So for that reason I felt like my reactions were more consistent." - P39
Authenticity.
Participants felt that sharing sensed otter animations enabled more open and genuine communication with their partner. While couples used both versions of the app to keep in touch with each other's current state, they felt that sensing ON enabled a more personal experience with each other because it was backed by data. Participants described feeling more connected to both their own and their partner's otter because the otters were tied to their bodies' physical states, as if they were the otters themselves.

"It's personal to me because it's reading what I'm doing...it's almost as if you could go through the phone yourself and wave or something like that....[With sensing OFF, it] could have just been a sticker app, an iMessage where you're just sending from a collection of animated stickers. Once it [sensed] what you were doing throughout the day, it [became] a more personal experience...because it's sensing what you're wanting to say throughout the day." - P35

"It was interesting that both of our body's responses were being recorded. That's what I mean by feeling connected like we're not physically together, but you're still able to get a sense of their actual bodily responses through the app, like through technology, and that was cool." - P31

Seeing their own sensed state also encouraged participants to reflect on how they were feeling, and to be more honest with both themselves and their partner by sharing it with them. Participants similarly felt that their partner was conveying their honest state with them. Some couples noted that even if they are fairly open with each other, they appreciated knowing that their partner's state was backed by data and that their partner was not just putting up a front.

"I'm pretty open with my feelings overall in life and with my partner, [but with Significant Otter] I'm more open to be like, honest, I guess, like totally 100% honest compared to 95% honest...the 5% can sometimes make a big difference....
I would send the [stressed otter] instead of being like, 'Oh, I don't want to look weak right now by showing that I am stressed.'" - P14

"I feel like [with sensing OFF]...I wouldn't know if that was actually how he was feeling or if he just picked [a smiley one]...just to send something nice. So knowing that he actually felt that way and [was] probably a little bit happy...was good." - P37

This motivated a few participants to be more thoughtful and responsive in their reactions to their partner, such as reacting more quickly or deliberately. For example, P31, who tended not to respond immediately to her partner's state with sensing OFF, felt more urgency to respond with sensing ON:

"I feel like [the otter's] a way of him reaching out. So for me to just wait [to] respond and not really think much of it, it feels rude not to validate whatever he sent out, because that is...like a virtual extension of him. So I felt like I needed to respond to it as soon as I saw." - P31

P36, who frequently used the quick reacts with sensing OFF, used them less with sensing ON. She explained that since her partner was sharing his emotions with her, she should put more effort into reacting:

"Although [quick reacts were] very convenient, I just felt more of a responsibility this time to [open the app]. Just because I felt like my partner was sending me state otters off his emotion. [Doing] a quick react otter...it was kind of dismissing the notification in a sense. Opening up the app and scrolling through all reacts so I could choose the right one made me feel like I was more connected with my partner in the interaction." - P36
As a system that recommends a user's current state, Significant Otter with sensing ON faced challenges in how participants perceived and trusted the sensed states. Though most participants felt that their sensed otter accurately reflected their feelings and activities, six participants were skeptical of the system's ability to sense states and disagreed with the suggestions they saw. The sensed states were also restrictive: these participants believed they were less likely to find an animation they wanted to send, whereas randomization presented an equal probability of seeing all states. They stated that they would have preferred a list of states with more variety to choose from.

"Whenever it would send something I would usually get the same otters. So I wish when it was sensing something I would be able to get a variety of otters at different times...compared to just one all the time." - P29
Subjective understandings of sensing and emotions.
While perceptions of inaccuracy were a barrier for some participants, participants varied in their definition of accuracy, which affected their ability to send their own otter and understand their partner's otter. Some participants were accepting of the system's small set of possible sensed states, and gave the app room for error. They did not expect the sensed states to be 100% accurate and reasoned about why the app would suggest states that did not quite match them, based on their knowledge of their heart rate or physical state. These participants also described typically being satisfied with at least one state in the list of suggested states, and did not mind if the other states did not fully match them.

"I would say most of the time it definitely matched. I mean it was pretty much on par with what I was doing, which was really cool to see. Sometimes after I would go for a walk or something, it would tell me I was surprised rather than like working out. But it was like a walk, so I was not doing crazy workouts. So I could see how that happened." - P34

"Most people at any given time throughout the day, you might be feeling a lot of different things...at least
with the sensed version...at least one of the things that it was showing you [matched]." - P38

Conversely, participants who perceived the app as inaccurate tended to expect exactly one accurate state, giving the app less flexibility to suggest other states that might not match their feelings. These discrepancies in perceptions of accuracy appear to stem from participants' different lay understandings of emotions and how they relate to heart rate. For instance, P1 was confident in knowing her own feelings better than the app, while P15 was conflicted on whether to follow his body (what the app suggested to him) or his mind (how he thought he felt). On the other hand, P14, quoted above, felt his heart was an indicator of how he truly felt, as opposed to how he thought he felt in his mind.
"I feel like I know what I feel like...this thing is guessing how I feel based on I don't know what, my heartbeat or...I don't think that's accurate." - P1

"I just never knew which one was the most accurate.... I really just thought it was that first one [in the state list and] the other ones could have been random [or] maybe a second best choice. I just didn't really figure out which one to go with, you know? I'm like betraying my heart rate if I choose a different one [than the first one] or something." - P15

Agency and effort in communicating feelings.
On the other extreme, a few participants described blindly trusting the system and sending their otter from the sensed state notifications even if they did not know which animation they were sending. Though the sensing feature was not intended to be highly accurate, these participants felt the system knew their feelings better than they did, and helped them to convey those feelings to their partner.

"The first two or three times [that I got the sensed notifications] they were on target as far as...the way I was feeling.... That would be the otter that I was trying to send anyway...so once I realized that that's what was happening, I wouldn't think too much about it anymore as far as opening up the app and seeing if it was the otter I wanted to send or anything like that." - P23

One participant warned against this "power of suggestion," where the system could influence them into thinking they felt a certain way. This aligns with prior work by Hollis and colleagues [19], which suggests that people may overly trust emotion sensing systems and be influenced by the system's interpretations of their emotions.

"With [sensing ON] it was like always asking yourself whether or not you really felt that way before sending it. And so I don't know if sometimes that would influence you to send it anyways or influence you to maybe feel that way. Yeah so, I do prefer [sensing OFF,] that way whatever you're feeling...you're able to just think of it on your own and just send it." - P18

Another participant noticed that by simply accepting and sharing the system's recommendation, he put less thought into curating a message to send to his partner. He pointed out that while the message itself was personalized to himself, he no longer took the time and effort to consider what to send:

"I think with [sensing ON] it was me sending stuff based on what I think the watch read that I felt. So it wasn't me taking the time and going through and saying, yeah, this is the one.
It was like, the watch said this is how I feel. So I guess this is how I feel. Let me send it. [It] was almost more impersonal, even though it was reading off of my data." - P2

Though the reduced effort made keeping in touch easier, effort is an important quality of communication that contributes to meaningful connection and close relationships [30, 31]. Moreover, work on AI-mediated communication suggests that systems that generate messages for communication, such as Significant Otter's sensed state animations, can affect perceptions of authenticity [51] and trustworthiness [25] in the sender of the message. Thus, despite sensed states being inherently more personal and intimate, they could potentially prompt less personal ways of communicating if the system has more agency than the user. In the following section, we recommend future research directions and system designs to explore how to reconcile this tension.
Overall, participants viewed both versions of Significant Otter as a lightweight communication channel that enabled them to easily keep in touch with their partner and let each other know how they were doing. In their baseline usage of the app with sensing OFF, participants felt the otter animations were an easy way to communicate without words, using them to convey their current state and suggest activities. However, this communication was also limited because a simple animation could mean multiple things or required more detail.

For most participants, Significant Otter with sensing ON mitigated some of the issues with sensing OFF and enhanced participants' ability to connect with each other. Our results show that biosignals can facilitate each stage of communication (
RQ1), where sending, understanding, and responding to the state animations were easier with sensing ON than with sensing OFF, because the app curated more straightforward messages for them to send, distilled to participants' sensed states as opposed to user-prescribed meanings. Biosignals also supported feelings of connection between romantic partners (
RQ2), where participants described feeling compelled to share honest emotions through the sensed state animations, rather than put up a front. Additionally, they felt the sensed otters were more representative of them and their partner, conveying a sense of their body's physical responses. Taken together, our results describe the role of biosignals in dyadic communication, distinct from system features such as the smartwatch or animated avatars [42], as promoting easier and more authentic communication. However, despite these benefits, biosignals also introduced new concerns around accuracy and agency over the message, where some participants felt the system was overly suggestive about how they were feeling or what to communicate to their partner. This further illustrates the tensions between AI-recommended and subjectively interpreted emotions found in prior work [19, 22], particularly due to variations in lay understanding of, or confidence around, one's own emotions.
While having a separate platform like Significant Otter dedicated to sharing states can create an intimate experience for couples, the sensed otter animations could easily integrate with existing platforms as "enhanced emojis." People already increasingly need to navigate multiple communication apps, which can cause "expression breakdowns" when they are unable to consistently express themselves across those apps [13]. By integrating biosignals into existing platforms, people could benefit from centralized communication with their partners while expressing themselves in more authentic ways through the sensed states. In platforms such as texting and mobile messaging, they could also easily start new conversations about the states they share, a common pattern we observed in our study.

As part of existing platforms, biosignals would primarily function as a means to augment communication, as opposed to acting as standalone messages like in Significant Otter. Rather than relying on relationship context, people would reference the augmented communication content to interpret the biosignals (e.g., text in mobile messaging [18, 41]). Researchers and designers of communication platforms could explore how biosignals could augment various types of communication content, such as images, videos, or emojis, and the new interaction patterns that may emerge. For example, biosignals could become new types of "emojis" or integrate with existing emojis by suggesting specific emojis or limiting the available options. This could help people navigate the ever-growing list of emojis, as well as clarify potentially ambiguous emojis. Suggested emojis could be annotated in order to designate them as sensed states (e.g., a heart symbol, beats per minute, or a special effect or badge attached to the image).
We found that varying perceptions of accuracy and agency over the animations affected participants' ability to use the sensing ON version of the app. Given people's own subjective understanding of their state as well as ongoing research on emotion detection, designers need to consider how to present and incorporate sensing technology at both its current and future levels of accuracy. That is, even if the system claims to be accurate based on the user's physical state, the detected emotion may conflict with how the user subjectively believes they feel [42] or overly influence their feelings [19, 22]. For Significant Otter, we lowered the autonomy of the system [25], where the app suggested both a single state in notifications and a list of possible states within the app. Some participants continued to be skeptical of the suggested states, having strong beliefs about how they were feeling, while others were confused by our design, believing that they should see only one recommendation. In fact, a few people did focus solely on the one recommendation in the notification, ignoring the list of in-app states.

Future directions in this area should investigate new ways for expressive biosignals systems to collaborate with people's subjective understanding of their own state. For example, researchers could explore systems that support different lay theories of emotions and how they affect perceptions of the system's accuracy, such as whether the user interprets emotions based on external contextual cues or internal physiological experiences [62, 69]. Designers of these systems should clearly and carefully introduce how sensing in the system works. For example, onboarding steps could detail the system's approach to emotion (e.g., its relationship to the body's physiological state, why the system might suggest multiple possible emotions), or provide adjustable settings that match one's personal understanding of their own state.
Future work could also explore how to involve people in system recommendations, such that they have more control over what they are feeling and how they share those feelings. For example, the system could allow people to provide feedback on their state in order to improve the system and feel involved in the system’s suggestions, or prompt them to interpret the suggested state before sharing it with their partner. This could encourage people to engage in more effort and meaning-making with their partner, and enhance the authenticity of the AI-recommended message.

Finally, researchers should consider how people’s understanding of their state and expectations for the system may be affected by different types of expressive biosignals. People may have a more developed understanding of their heart rate given its accessibility, including in fitness trackers and watches, or simply from being able to feel when their own heart beats faster. On the other hand, less accessible biosignals such as skin conductance or brain activity may be less understood or produce different lay interpretations (e.g., associating the brain with cognitive functions), which could affect people’s willingness to accept the system’s state recommendations.
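One way to involve people in recommendations, as suggested above, is to let their corrections feed back into future suggestions. The sketch below is a deliberately simple illustration of that loop, a keyed counter rather than any model the authors propose; the bucketing scheme and class name are hypothetical.

```python
# Hypothetical feedback loop: each time the user corrects a suggested state,
# the system shifts weight toward the corrected state for similar readings.
# A real system would use a proper model; this counter only illustrates the
# idea of user involvement in AI-recommended states.
from collections import defaultdict

class FeedbackAdjustedSuggester:
    def __init__(self):
        # coarse heart-rate bucket -> {state: number of user corrections}
        self.counts = defaultdict(lambda: defaultdict(int))

    def bucket(self, bpm):
        return bpm // 10  # coarse 10-bpm buckets, illustrative

    def record_correction(self, bpm, chosen_state):
        """The user told us what they actually felt at this reading."""
        self.counts[self.bucket(bpm)][chosen_state] += 1

    def suggest(self, bpm, default_state):
        """Prefer states the user has confirmed for similar readings."""
        seen = self.counts[self.bucket(bpm)]
        if not seen:
            return default_state
        return max(seen, key=seen.get)
```

After a few corrections around the same heart-rate range, the suggester starts preferring the user's own labels over the default, which could make suggested states feel more collaborative than imposed.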
LIMITATIONS

Though our findings elucidate the value of expressive biosignals in communication, there are several limitations to this work. First, we ran a non-counterbalanced within-subjects study in order to reduce confusion in participants’ mental model of the app, where sensing was a “feature update” rather than a feature being removed. Most participants perceived sensing ON as a feature update that enhanced their use of the app; however, a few were strongly influenced by their mental model of the app with sensing OFF and expected it to work the same way. Moreover, novelty effects were much stronger for sensing OFF than for sensing ON: the number of sent messages dropped by 605 between the first and second week of using sensing OFF, compared to a drop of 77 between the first and second week of using sensing ON. Many participants also described getting used to the app during the second half of the study. We took these differences into consideration during both our interviews and analyses; however, future work should consider a between-subjects design or a longer longitudinal study to reduce potential order or novelty effects.

Second, we deployed the app in situ on participants’ own smartwatches for use in their everyday lives, in order to achieve high ecological validity. Given the differences in participants’ lifestyles, especially during the COVID-19 pandemic, as well as tendencies towards different devices (e.g., participants with large hands experiencing difficulty interacting with the watch app interface), participants naturally had diverse experiences with the app. Additionally, the app had hard-coded times set for certain states, which may
not have matched people’s typical patterns. Future systems should consider providing options for people to set their typical meal or sleeping times, or sense additional signals such as location to better predict their state. Limitations of the Apple Watch OS also affected whether the app worked as intended for all participants, where some participants received no notifications while others felt that they received too many. Thus, while our qualitative findings present a variety of interesting communication patterns that stem from participants’ diverse usage, studies with greater levels of control or larger sample sizes are necessary to clarify potential causal effects that biosignals have on communication. More granular data collection would also help to capture and further understand the differences in participants’ usage, such as the number of notifications that influenced sending a state or participants’ perceptions of each sent state.

Finally, while we recruited a diverse sample of participants from different backgrounds, participants were self-selected and may have shown a greater interest in wearable and couple-specific technologies. Additionally, the shortest relationship length among participants was 11 months. People in earlier stages of their relationship or without established communication practices with each other may use the app in different ways. Given stay-at-home orders, we also restricted recruitment to people who were living apart from their partner, or living together if one or both of them were essential workers. Thus, we were unable to capture how people who did not match these criteria might use the app outside of these unusual circumstances. It is also possible that our participants would engage with the app differently outside of these circumstances, as many had to adjust to changes in their daily routine during the study.
CONCLUSION

We ran a month-long within-subjects field study of Significant Otter, an Apple Watch and iPhone app that enables biosignals sharing through animated otter messages, to explore the role of biosignals in communication. We compared participants’ usage of Significant Otter with and without biosignals. Results showed that biosignals can support easier and more authentic communication, while eliciting concerns around accuracy and agency over the communication content based on participants’ diverse understandings of emotions. We provide insights on the opportunities and challenges around integrating biosignals into communication and make recommendations for future research and design, including applying biosignals to existing communication platforms as “enhanced emojis” to promote open communication, and exploring greater user involvement in AI-recommended states.
ACKNOWLEDGEMENTS
We would like to thank John Tang and Mayank Goel for their early suggestions that helped shape our study design, as well as their feedback on our results. We are also grateful to David Lin for his help with our qualitative analysis, and Sven Kratz for his feedback on our paper submission.
REFERENCES

[1] Arthur Aron, Elaine N Aron, and Danny Smollan. 1992. Inclusion of Other in the Self Scale and the structure of interpersonal closeness. Journal of Personality and Social Psychology 63, 4 (1992), 596. https://doi.org/10.1037/0022-3514.63.4.596
[2] Elizabeth Bales, Kevin A Li, and William Griswold. 2011. CoupleVIBE: mobile implicit communication to improve awareness for (long-distance) couples. In Proceedings of the ACM 2011 conference on Computer supported cooperative work. ACM, 65–74.
[3] Frank R Bentley and Crysta J Metcalf. 2007. Sharing motion information with close family and friends. In Proceedings of the SIGCHI conference on Human Factors in computing systems.
Handbook of communication and social interaction skills ([n. d.]), 551–594.
[6] Hyunsung Cho, Jinyoung Oh, Juho Kim, and Sung-Ju Lee. 2020. I Share, You Care: Private Status Sharing and Sender-Controlled Notifications in Mobile Instant Messaging. Proceedings of the ACM on Human-Computer Interaction 4, CSCW1 (2020), 1–25.
[7] Scott Counts and Eric Fellheimer. 2004. Supporting social presence through lightweight photo sharing on and off the desktop. In Proceedings of the SIGCHI conference on Human factors in computing systems. 599–606.
[8] Lisa G Cowan. 2011. Lightweight social communication using visual media and mobile phones. Ph.D. Dissertation. UC San Diego.
[9] Max T. Curran, Jeremy Raboff Gordon, Lily Lin, Priyashri Kamlesh Sridhar, and John Chuang. 2019. Understanding Digitally-Mediated Empathy: An Exploration of Visual, Narrative, and Biosensory Informational Cues. In Proceedings of the 2019 ACM Conference on Human Factors in Computing Systems. ACM, 614:1–614:13. https://doi.org/10.1145/3290605.3300844
[10] Anind K Dey and Ed de Guzman. 2006. From awareness to connectedness: the design and deployment of presence displays. In Proceedings of the SIGCHI conference on human factors in computing systems. ACM, 899–908.
[11] Maria Egger, Matthias Ley, and Sten Hanke. 2019. Emotion recognition from physiological signal analysis: A review. Electronic Notes in Theoretical Computer Science 343 (2019), 35–55.
[12] Elisabeth Eichhorn, Reto Wettach, and Eva Hornecker. 2008. A stroking device for spatially separated couples. In Proceedings of the 10th international conference on Human computer interaction with mobile devices and services. ACM, 303–306.
[13] Carla F Griggio, Joanna Mcgrenere, and Wendy E Mackay. 2019. Customizations and Expression Breakdowns in Ecosystems of Communication Apps. Proceedings of the ACM on Human-Computer Interaction 3, CSCW (2019), 1–26.
[14] Carla F Griggio, Midas Nouwens, Joanna Mcgrenere, and Wendy E Mackay. 2019. Augmenting Couples’ Communication with Lifelines: Shared Timelines of Mixed Contextual Information. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, 623.
[15] Antal Haans and Wijnand IJsselsteijn. 2006. Mediated social touch: a review of current research and future directions. Virtual Reality 9, 2-3 (2006), 149–159.
[16] Marc Hassenzahl, Stephanie Heidecker, Kai Eckoldt, Sarah Diefenbach, and Uwe Hillmann. 2012. All you need is love: Current strategies of mediating intimate relationships through technology. ACM Transactions on Computer-Human Interaction (TOCHI) 19, 4 (2012), 1–19.
[17] Mariam Hassib, Daniel Buschek, Pawel W Wozniak, and Florian Alt. 2017. HeartChat: Heart Rate Augmented Mobile Chat to Support Empathy and Awareness. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. ACM, 2239–2251. https://doi.org/10.1145/3025453.3025758
[18] Mariam Hassib, Stefan Schneegass, Philipp Eiglsperger, Niels Henze, Albrecht Schmidt, and Florian Alt. 2017. EngageMeter: A System for Implicit Audience Engagement Sensing Using Electroencephalography. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. ACM, 5114–5119.
[19] Victoria Hollis, Alon Pekurovsky, Eunika Wu, and Steve Whittaker. 2018. On being told how we feel: how algorithmic sensor feedback influences emotion perception. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 2, 3 (2018), 1–31.
[20] Noura Howell, John Chuang, Abigail De Kosnik, Greg Niemeyer, and Kimiko Ryokai. 2018. Emotional Biosensing: Exploring Critical Alternatives. Proceedings of the ACM on Human-Computer Interaction 2, CSCW (2018), 69. https://doi.org/10.1145/3274338
[21] Noura Howell, Laura Devendorf, Rundong Kevin Tian, Tomás Vega Galvez, Nan-Wei Gong, Ivan Poupyrev, Eric Paulos, and Kimiko Ryokai. 2016. Biosignals as Social Cues: Ambiguity and Emotional Interpretation in Social Displays of Skin Conductance. In Proceedings of the 2016 ACM Conference on Designing Interactive Systems. ACM, 865–870.
[22] Noura Howell, Laura Devendorf, Tomás Alfonso Vega Gálvez, Rundong Tian, and Kimiko Ryokai. 2018. Tensions of Data-Driven Reflection: A Case Study of Real-Time Emotional Biosensing. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, 431. https://doi.org/10.1145/3173574.3174005
[23] Noura Howell, Greg Niemeyer, and Kimiko Ryokai. 2019. Life-Affirming Biosensing in Public: Sounding Heartbeats on a Red Bench. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). ACM, New York, NY, USA, Article 680, 16 pages. https://doi.org/10.1145/3290605.3300910
[24] Wijnand IJsselsteijn, Joy van Baren, and Froukje van Lanen. 2003. Staying in touch: Social presence and connectedness through synchronous and asynchronous communication media. Human-Computer Interaction: Theory and Practice (Part II) 2, 924 (2003), e928.
[25] Maurice Jakesch, Megan French, Xiao Ma, Jeffrey T Hancock, and Mor Naaman. 2019. AI-mediated communication: How the perception that profile text was written by AI affects trustworthiness. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–13.
[26] Joris H Janssen, Jeremy N Bailenson, Wijnand A IJsselsteijn, and Joyce HDM Westerink. 2010. Intimate Heartbeats: Opportunities for Affective Communication Technology. IEEE Transactions on Affective Computing.
Interpersonal communication.
CHI’06 extended abstracts on human factors in computing systems. 363–368.
[29] Joseph ’Jofish’ Kaye, Mariah K Levitt, Jeffrey Nevins, Jessica Golden, and Vanessa Schmidt. 2005. Communicating intimacy one bit at a time. In CHI’05 extended abstracts on Human factors in computing systems. ACM, 1529–1532.
[30] Ryan Kelly, Daniel Gooch, Bhagyashree Patil, and Leon Watts. 2017. Demanding by design: Supporting effortful communication practices in close personal relationships. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing. 70–83.
[31] Ryan Kelly, Daniel Gooch, and Leon Watts. 2018. ’It’s More Like a Letter’: An Exploration of Mediated Conversational Effort in Message Builder. Proceedings of the ACM on Human-Computer Interaction 2, CSCW (2018), 1–23.
[32] Sara Kiesler, Jane Siegel, and Timothy W McGuire. 1984. Social Psychological Aspects of Computer-Mediated Communication. American Psychologist 39, 10 (1984), 1123.
[33] Susanna Kim. 2014. Cities’ Bedtimes Revealed in One Map. https://abcnews.go.com/Business/map-claims-show-people-sleep/story?id=26042978
[34] Andrei P Kirilenko and Svetlana Stepchenkova. 2016. Inter-coder agreement in one-to-many classification: fuzzy kappa. PloS one.
Proceedings of the 7th International Conference on Tangible, Embedded and Embodied Interaction. ACM, 201–204.
[37] Kurt Kräuchi and Anna Wirz-Justice. 2001. Circadian clues to sleep onset mechanisms. Neuropsychopharmacology 25, 1 (2001), S92–S96.
[38] Ronald B Larson. 2002. When is dinner? Journal of Food Distribution Research.
Journal of personality and social psychology 74, 5 (1998), 1238.
[40] Fannie Liu, Laura Dabbish, and Geoff Kaufman. 2017. Can Biosignals be Expressive? How Visualizations Affect Impression Formation from Shared Brain Activity. Proceedings of the ACM on Human-Computer Interaction 1, CSCW (2017), 71:1–71:21. https://doi.org/10.1145/3134706
[41] Fannie Liu, Laura Dabbish, and Geoff Kaufman. 2017. Supporting Social Interactions with an Expressive Heart Rate Sharing Application. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 1, 3 (2017), 77:1–77:26. https://doi.org/10.1145/3130943
[42] Fannie Liu, Mario Esparza, Maria Pavlovskaia, Geoff Kaufman, Laura Dabbish, and Andrés Monroy-Hernández. 2019. Animo: Sharing Biosignals on a Smartwatch for Lightweight Social Connection. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 3, 1 (2019), 18:1–18:19. https://doi.org/10.1145/3314405
[43] Fannie Liu, Geoff Kaufman, and Laura Dabbish. 2019. The Effect of Expressive Biosignals on Empathy and Closeness for a Stigmatized Group Member. Proceedings of the ACM on Human-Computer Interaction 3, CSCW (2019), 201:1–201:17.
[44] Shao-Kang Lo. 2008. The nonverbal communication functions of emoticons in computer-mediated communication. CyberPsychology & Behavior 11, 5 (2008), 595–597.
[45] Erina L. MacGeorge, Bo Feng, and Brant R. Burleson. 2011. Supportive Communication. In Handbook of Interpersonal Communication, Mark L. Knapp and John A. Daly (Eds.). SAGE, Thousand Oaks, CA, 317–354.
[46] Nick Merrill and Coye Cheshire. 2017. Trust Your Heart: Assessing Cooperation and Trust with Biosignals in Computer-Mediated Interactions. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing. ACM, 2–12. https://doi.org/10.1145/2998181.2998286
[47] Nick Merrill, John Chuang, and Coye Cheshire. 2019. Sensing is Believing: What People Think Biosensors Can Reveal About Thoughts and Feelings. In Proceedings of the 2019 on Designing Interactive Systems Conference. 413–420.
[48] Hannah Miller, Daniel Kluver, Jacob Thebault-Spieker, Loren Terveen, and Brent Hecht. 2017. Understanding emoji ambiguity in context: The role of text in emoji-related miscommunication. AAAI Press.
[49] Hannah Jean Miller, Jacob Thebault-Spieker, Shuo Chang, Isaac Johnson, Loren Terveen, and Brent Hecht. 2016. "Blissfully Happy" or "Ready to Fight": Varying Interpretations of Emoji. In Proceedings of the 10th International Conference on Web and Social Media.
[50] Hyeryung Christine Min and Tek-Jin Nam. 2014. Biosignal sharing for affective connectedness. In CHI’14 Extended Abstracts on Human Factors in Computing Systems. ACM, 2191–2196. https://doi.org/10.1145/2559206.2581345
[51] Andrés Monroy-Hernández, Benjamin Mako Hill, Jazmin Gonzalez-Rivero, and Danah Boyd. 2011. Computers can’t give credit: How automatic attribution falls short in an online remixing community. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 3421–3430.
[52] Florian ’Floyd’ Mueller, Frank Vetere, Martin R Gibbs, Jesper Kjeldskov, Sonja Pedell, and Steve Howard. 2005. Hug over a distance. In CHI’05 extended abstracts on Human factors in computing systems. 1673–1676.
[53] Jonathan Posner, James A Russell, and Bradley S Peterson. 2005. The circumplex model of affect: An integrative approach to affective neuroscience, cognitive development, and psychopathology. Development and psychopathology 17, 3 (2005), 715–734.
[54] Katherine A Sauder, Elyse R Johnston, Ann C Skulas-Ray, Tavis S Campbell, and Sheila G West. 2012. Effect of meal content on heart rate variability and cardiovascular reactivity to mental stress. Psychophysiology 49, 4 (2012), 470–477.
[55] Nathan Semertzidis, Michaela Scary, Josh Andres, Brahmi Dwivedi, Yutika Chandrashekhar Kulwe, Fabio Zambetta, and Florian Floyd Mueller. 2020. Neo-Noumena: Augmenting Emotion Communication. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–13.
[56] Emma Seppala, Timothy Rossomando, and James R Doty. 2013. Social connection and compassion: Important predictors of health and well-being. Social Research: An International Quarterly 80, 2 (2013), 411–430. https://doi.org/10.1353/sor.2013.0027
[57] Petr Slovák, Joris Janssen, and Geraldine Fitzpatrick. 2012. Understanding Heart Rate Sharing: Towards Unpacking Physiosocial Space. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 859–868. https://doi.org/10.1145/2207676.2208526
[58] A Strauss and J Corbin. 1998. Basics of qualitative research techniques. (1998). https://doi.org/10.4135/9781452230153
[59] Ying Tang and Khe Foon Hew. 2018. Emoticon, emoji, and sticker use in computer-mediated communications: Understanding its communicative function, impact, user behavior, and motive. In New Media for Educational Change. Springer, 191–201.
[60] Martin Tanis and Tom Postmes. 2003. Social cues and impression formation in CMC. Journal of Communication 53, 4 (2003), 676–693. https://doi.org/10.1111/j.1460-2466.2003.tb02917.x
[61] Ye Tian, Thiago Galery, Giulio Dulcinati, Emilia Molimpakis, and Chao Sun. 2017. Facebook sentiment: Reactions and emojis. In Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media. 11–16.
[62] Yukiko Uchida, Sarah SM Townsend, Hazel Rose Markus, and Hilary B Bergsieker. 2009. Emotions as within or between people? Cultural variation in lay theories of emotion expression and inference. Personality and social psychology bulletin 35, 11 (2009), 1427–1439.
[63] Joseph B Walther. 2011. Theories of computer-mediated communication and interpersonal relations. The handbook of interpersonal communication.
Proceedings of the 10th Conference on Human-Computer Interaction with Mobile Devices and Services. ACM, 535–538. https://doi.org/10.1145/1409240.1409338
[65] Richard West and Lynn H Turner. 2018. Introducing Communication Theory: Analysis And Application. McGraw-Hill Education.
[66] Jason Wiese, Patrick Gage Kelley, Lorrie Faith Cranor, Laura Dabbish, Jason I Hong, and John Zimmerman. 2011. Are you close with me? are you nearby?: investigating social groups, closeness, and willingness to share. In Proceedings of the 13th International Conference on Ubiquitous Computing. ACM, 197–206.
[67] Daniel H Wilson and Christopher Atkeson. 2003. The narrator: A daily activity summarizer using simple sensors in an instrumented environment. In The Fifth International Conference on Ubiquitous Computing 2003 Demonstrations.
[68] Sarah Wiseman and Sandy JJ Gould. 2018. Repurposing Emoji for Personalised Communication: Why means “I love you”. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, 152.
[69] Vanda L Zammuner. 2000. Men’s and women’s lay theories of emotion. Gender and emotion: Social psychological perspectives (2000), 48–70.

Daily Survey Questions
1. Please describe one noteworthy experience you had with the app today (e.g., whether you noticed anything new, learned a new function, used the app in a new way).
Sending your State Otter
STATE otters are the otters you see on your main app screen that you can send to your partner to “initiate” an interaction. REACT otters are the ones on the “Tap to React” screen that you can send to your partner to “respond” to their otter. Our records indicate that you sent the following state otter animation today, around (time).
Receiving your Partner’s State Otter
Our records indicate that you received the following state otter animation from your partner today around (time). Please answer the questions below about viewing your partner’s state otter animation.
CHI ’21, May 8–13, 2021, Yokohama, Japan Liu et al.
<GIF of the state animation received>

1. What did you think of your partner’s state otter animation when you received it?
2. What message do you think your partner was trying to convey with this animation?

Our records indicate that you reacted with the following animation:
Mid-Study Interview Questions
General
1. What has your experience with the app been like overall so far?
Phone vs Watch
1. Did you find yourself using the app more on your phone or your watch? Why?
2. How did you feel about using one vs the other?
3. Were there any differences in how you used it on the watch vs the phone?
Notifications
1. You mentioned in some of the daily surveys that you’ve noticed notifications such as (describe notifications they mentioned).
   a. What did you think of these notifications?
   b. When did you decide to open them, vs dismiss or ignore them?
2. Did you use the quick react or share button features? When and why?
3. Did you ever look at the app on your own, without first seeing a notification?
   a. Was this on your phone or watch? Why?
Sharing Behavior
Now we’re going to talk about the state otters you sent to your partner, that they may or may not have reacted to.

1. Here’s an example of a state otter you sent to your partner on (time). They reacted with (describe any react otters).
   a. Can you talk about what was happening when you sent it?
   b. What did you think of your partner’s reaction?
2. Here are some examples of the state otters you’ve sent over the past two weeks (show slide of top 5), and some examples of the react otters your partner sent to you. Note that these are not necessarily matched up with each other.
   a. Do you recall sending these state otters?
   b. Can you give me a few examples of those times?
   c. How did your partner react?
   d. What did you think about the way they reacted?
3. Were there any state otter animations, provided in the app or not, that you wish you could have sent to your partner? Why or why not?
React Behavior
Now we’re going to talk about the otters you received from your partner, and may or may not have reacted to.

1. Here’s an example of a state otter you received from your partner on (time).
   a. Can you talk about what you thought when you received this otter from your partner?
   b. Why did you decide to send that react otter? (or not react)
2. Here are some examples of the state otters you received from your partner over the past two weeks (show slide of top 5), and some examples of react otters you sent to them. Note that these are not necessarily matched with each other.
   a. Do you recall seeing these state otters from your partner?
   b. Can you give me a few examples of those times?
   c. What do you think they were trying to convey to you?
   d. How did you react, if at all?
   e. Why did you react in this way?
   f. (if they sent a react otter) How did you decide which otter to react with?
   g. What were you trying to convey to your partner?
3. In the examples you described, were there any ways you wish you could have reacted to your partner, but weren’t able to within the app?
4. Were there any otter animations, provided in the app or not, that you wish you could have used to react to your partner? Why or why not?
Communication Behavior

Now I want to ask you about your communication more generally.

1. Please describe any instances in which you and your partner talked about the app.
2. How has using the app been similar or different to other ways in which you communicate with your partner?
3. Do you find you’re communicating similar or different things with the app versus other ways you communicate?
4. Did you ever use the app in conjunction with other communication channels, such as in conversation or with any of the apps you just mentioned? For example, sending an otter and then mentioning it in another app, or your partner receiving your otter and then commenting on it in another app.
5. (if mentioned) You mentioned in your daily surveys that your partner reacted to your otter outside of the app as well, for example, (describe their survey response). Could you describe this in more detail?
6. Have there been any changes to the way you communicate with your partner since the start of the study?

Exit Interview Questions
General
1. What has your experience with the second version been like overall?
2. What did you think about the app update after the mid-study interview?
3. What was your understanding of what the update was?
4. Did you have any expectations for this update?
5. What did you think about the heart rate sensing?
6. Did you notice any differences in the way the app worked?
7. Did you notice any differences in the way you used this version, compared to the first version?
8. Which version do you prefer? Why?
9. Aside from the app changing, was there anything different about your circumstances from the first half of the study (e.g., your location, your job, ways you and your partner communicated)?
10. There are lots of reasons why participation in the study may have changed since the update. For example, life circumstances, or general decline in the novelty or motivation to use the app. Did you notice any differences in your participation in the study, including your use of the app, as the study went on?
Notifications
1. What did you think about the notifications in the second half of the study?
2. How did they compare to the notifications in the first half?
Sharing Behavior
1. How did knowing (if they knew) that the app sensed your state affect the way you sent your otter to your partner, if at all?
2. You mentioned for V1 that you sent your state otter when (describe instances they sent them from the mid-study interview). How did this compare to the times you sent your state otter with V2?
3. You mentioned for V1 that you sent your state otter because (describe reasons they sent them from the mid-study interview). How did this compare to the reasons you sent your state otter with V2?
4. In our last interview, you talked about how you sometimes expected certain reactions from your partner, both within and outside of the app. How did your expectations for your partner’s reactions compare when you sent your sensed state otters in V2, if you had any?
5. Were there any times you saw a state otter that did match your mood that you decided not to send?
React Behavior
1. What did you think about seeing your partner’s sensed state otter in V2?
2. How did knowing (if they knew) that you were viewing your partner’s sensed state affect the way you reacted to your partner, if at all?
3. You mentioned for V1 that you sent a react otter when (describe instances/reasons for sending them described in mid-study interview). How did this compare to the times you reacted with react otters with V2?
4. You mentioned for V1 that you sent react otters that (describe the way they matched react otters in the mid-study interview). How did this compare to the way you reacted with react otters with V2?
5. You mentioned for V1 that you reacted on (describe other platforms they used to react stated in the mid-study interview). How did this compare to the times you reacted to your partner in the second half of the study?
Communication Behavior
1. Did you and your partner talk about the new version of the app at all? How did you talk about it? What did you talk about?
2. You mentioned in our last interview that V1 of the app was (describe what they said about how it compared to the other ways they communicate in the mid-study interview). How did V2 compare to the way you communicated with V1?
3. Have there been any changes to the way you communicate with your partner in the last two weeks?
Significant Otter: Understanding the Role of Biosignals in Communication CHI ’21, May 8–13, 2021, Yokohama, Japan (cid:18)(cid:21)(cid:26)(cid:18)(cid:21)(cid:19)(cid:21)(cid:19) (cid:52)(cid:88)(cid:68)(cid:79)(cid:87)(cid:85)(cid:76)(cid:70)(cid:86)(cid:3)(cid:54)(cid:88)(cid:85)(cid:89)(cid:72)(cid:92)(cid:3)(cid:54)(cid:82)(cid:73)(cid:87)(cid:90)(cid:68)(cid:85)(cid:72)(cid:75)(cid:87)(cid:87)(cid:83)(cid:86)(cid:29)(cid:18)(cid:18)(cid:86)(cid:81)(cid:68)(cid:83)(cid:75)(cid:70)(cid:76)(cid:17)(cid:70)(cid:82)(cid:20)(cid:17)(cid:84)(cid:88)(cid:68)(cid:79)(cid:87)(cid:85)(cid:76)(cid:70)(cid:86)(cid:17)(cid:70)(cid:82)(cid:80)(cid:18)(cid:52)(cid:18)(cid:40)(cid:71)(cid:76)(cid:87)(cid:54)(cid:72)(cid:70)(cid:87)(cid:76)(cid:82)(cid:81)(cid:18)(cid:37)(cid:79)(cid:82)(cid:70)(cid:78)(cid:86)(cid:18)(cid:36)(cid:77)(cid:68)(cid:91)(cid:18)(cid:42)(cid:72)(cid:87)(cid:54)(cid:88)(cid:85)(cid:89)(cid:72)(cid:92)(cid:51)(cid:85)(cid:76)(cid:81)(cid:87)(cid:51)(cid:85)(cid:72)(cid:89)(cid:76)(cid:72)(cid:90)(cid:34)(cid:38)(cid:82)(cid:81)(cid:87)(cid:72)(cid:91)(cid:87)(cid:54)(cid:88)(cid:85)(cid:89)(cid:72)(cid:92)(cid:44)(cid:39)(cid:32)(cid:54)(cid:57)(cid:66)(cid:27)(cid:88)(cid:48)(cid:92)(cid:68)(cid:21)(cid:20)(cid:45)(cid:49)(cid:41)(cid:19)(cid:44)(cid:42)(cid:49)(cid:73)(cid:9)(cid:38)(cid:82)(cid:81)(cid:87)(cid:72)(cid:91)(cid:87)(cid:47)(cid:76)(cid:69)(cid:85)(cid:68)(cid:85)(cid:92)(cid:44)(cid:39)(cid:32)(cid:56)(cid:53)(cid:66)(cid:23)(cid:49)(cid:48)(cid:86)(cid:81)(cid:22)(cid:171) (cid:20)(cid:18)(cid:24)(cid:19) (cid:702)(cid:707)(cid:713)(cid:711)(cid:708) (cid:702)(cid:739)(cid:745)(cid:743)(cid:740)(cid:729)(cid:746)(cid:728)(cid:745)(cid:734)(cid:740)(cid:739) 
(cid:718)(cid:740)(cid:746)(cid:3)(cid:748)(cid:734)(cid:737)(cid:737)(cid:3)(cid:743)(cid:726)(cid:745)(cid:730)(cid:3)(cid:745)(cid:733)(cid:730)(cid:3)(cid:728)(cid:733)(cid:726)(cid:743)(cid:726)(cid:728)(cid:745)(cid:730)(cid:743)(cid:734)(cid:744)(cid:745)(cid:734)(cid:728)(cid:744)(cid:3)(cid:740)(cid:731)(cid:3)(cid:678)(cid:684)(cid:3)(cid:729)(cid:734)(cid:731)(cid:731)(cid:730)(cid:743)(cid:730)(cid:739)(cid:745)(cid:3)(cid:726)(cid:739)(cid:734)(cid:738)(cid:726)(cid:745)(cid:734)(cid:740)(cid:739)(cid:744)(cid:3)(cid:731)(cid:740)(cid:743)(cid:3)(cid:726)(cid:3)(cid:739)(cid:730)(cid:748)(cid:3)(cid:738)(cid:730)(cid:744)(cid:744)(cid:726)(cid:732)(cid:734)(cid:739)(cid:732)(cid:3)(cid:726)(cid:741)(cid:741)(cid:3)(cid:731)(cid:740)(cid:743)(cid:728)(cid:740)(cid:746)(cid:741)(cid:737)(cid:730)(cid:744)(cid:675)(cid:3)(cid:713)(cid:733)(cid:734)(cid:744)(cid:3)(cid:744)(cid:746)(cid:743)(cid:747)(cid:730)(cid:750)(cid:3)(cid:744)(cid:733)(cid:740)(cid:746)(cid:737)(cid:729)(cid:3)(cid:745)(cid:726)(cid:736)(cid:730)(cid:3)(cid:726)(cid:727)(cid:740)(cid:746)(cid:745)(cid:3)(cid:678)(cid:682)(cid:674)(cid:679)(cid:677)(cid:3)(cid:738)(cid:734)(cid:739)(cid:746)(cid:745)(cid:730)(cid:744)(cid:675) (cid:702)(cid:707)(cid:712)(cid:713)(cid:711)(cid:714)(cid:696)(cid:713)(cid:702)(cid:708)(cid:707)(cid:712) (cid:695)(cid:726)(cid:728)(cid:736)(cid:732)(cid:743)(cid:740)(cid:746)(cid:739)(cid:729) 
"We are creating a new messaging app for couples, in which users can send animations as standalone messages to each other. Here are some example messages a user, represented by a light brown otter, might send:"
[Example animation images]

Background
"Users can respond to messages they receive from their partner using reaction animations. For example, Person A (represented by the light brown otter) may send the following animation to their partner, Person B:"

[Person A's message]
"Person B (represented by the dark brown otter) may respond with the following reaction animation:"

[Person B's reaction]

Instructions

"For the following questions, you will rate 17 reaction animations. Imagine using the animations you see as a standalone reaction to an animated message that a close partner sent (e.g., significant other, best friend, family member), as described on the previous page. In some animations, your partner will be represented by a light brown otter. Please answer the questions based on your first instinct."

Neutral

Answer the following questions for the reaction animation, sent in response to a close partner, below:
To what extent do you believe this animation expresses a negative versus positive reaction? [Negative – Positive]

To what extent do you believe this animation expresses a low energy versus high energy reaction? [Low energy – High energy]

To what extent do you believe this animation demonstrates closeness (i.e., intimacy and familiarity) towards your partner? [Not at all – Very much]

To what extent do you believe this animation shows caring and concern towards your partner? [Not at all – Very much]

To what extent do you believe this animation demonstrates understanding of your partner's feelings? [Not at all – Very much]

To what extent do you believe this animation requests follow-up action (after you've sent the animation) from your partner? [Not at all – Very much]

To what extent do you believe this animation shows that your partner is valued? [Not at all – Very much]

Please write one possible message that this animation conveys in words. [open-ended]
Imagine a message you received from your partner that would prompt you to respond with this animation. Please describe that message. [open-ended]

Hand Holding

Answer the following questions for the reaction animation, sent in response to a close partner, below:
To what extent do you believe this animation expresses a negative versus positive reaction? [Negative – Positive]

To what extent do you believe this animation expresses a low energy versus high energy reaction? [Low energy – High energy]

To what extent do you believe this animation demonstrates closeness (i.e., intimacy and familiarity) towards your partner? [Not at all – Very much]

To what extent do you believe this animation shows caring and concern towards your partner? [Not at all – Very much]
CHI ’21, May 8–13, 2021, Yokohama, Japan Liu et al. (cid:18)(cid:21)(cid:26)(cid:18)(cid:21)(cid:19)(cid:21)(cid:19) (cid:52)(cid:88)(cid:68)(cid:79)(cid:87)(cid:85)(cid:76)(cid:70)(cid:86)(cid:3)(cid:54)(cid:88)(cid:85)(cid:89)(cid:72)(cid:92)(cid:3)(cid:54)(cid:82)(cid:73)(cid:87)(cid:90)(cid:68)(cid:85)(cid:72)(cid:75)(cid:87)(cid:87)(cid:83)(cid:86)(cid:29)(cid:18)(cid:18)(cid:86)(cid:81)(cid:68)(cid:83)(cid:75)(cid:70)(cid:76)(cid:17)(cid:70)(cid:82)(cid:20)(cid:17)(cid:84)(cid:88)(cid:68)(cid:79)(cid:87)(cid:85)(cid:76)(cid:70)(cid:86)(cid:17)(cid:70)(cid:82)(cid:80)(cid:18)(cid:52)(cid:18)(cid:40)(cid:71)(cid:76)(cid:87)(cid:54)(cid:72)(cid:70)(cid:87)(cid:76)(cid:82)(cid:81)(cid:18)(cid:37)(cid:79)(cid:82)(cid:70)(cid:78)(cid:86)(cid:18)(cid:36)(cid:77)(cid:68)(cid:91)(cid:18)(cid:42)(cid:72)(cid:87)(cid:54)(cid:88)(cid:85)(cid:89)(cid:72)(cid:92)(cid:51)(cid:85)(cid:76)(cid:81)(cid:87)(cid:51)(cid:85)(cid:72)(cid:89)(cid:76)(cid:72)(cid:90)(cid:34)(cid:38)(cid:82)(cid:81)(cid:87)(cid:72)(cid:91)(cid:87)(cid:54)(cid:88)(cid:85)(cid:89)(cid:72)(cid:92)(cid:44)(cid:39)(cid:32)(cid:54)(cid:57)(cid:66)(cid:27)(cid:88)(cid:48)(cid:92)(cid:68)(cid:21)(cid:20)(cid:45)(cid:49)(cid:41)(cid:19)(cid:44)(cid:42)(cid:49)(cid:73)(cid:9)(cid:38)(cid:82)(cid:81)(cid:87)(cid:72)(cid:91)(cid:87)(cid:47)(cid:76)(cid:69)(cid:85)(cid:68)(cid:85)(cid:92)(cid:44)(cid:39)(cid:32)(cid:56)(cid:53)(cid:66)(cid:23)(cid:49)(cid:48)(cid:86)(cid:81)(cid:22)(cid:171) (cid:27)(cid:18)(cid:24)(cid:19) 
(cid:713)(cid:740)(cid:3)(cid:748)(cid:733)(cid:726)(cid:745)(cid:3)(cid:730)(cid:749)(cid:745)(cid:730)(cid:739)(cid:745)(cid:3)(cid:729)(cid:740)(cid:3)(cid:750)(cid:740)(cid:746)(cid:3)(cid:727)(cid:730)(cid:737)(cid:734)(cid:730)(cid:747)(cid:730)(cid:3)(cid:745)(cid:733)(cid:734)(cid:744)(cid:3)(cid:726)(cid:739)(cid:734)(cid:738)(cid:726)(cid:745)(cid:734)(cid:740)(cid:739)(cid:3)(cid:729)(cid:730)(cid:738)(cid:740)(cid:739)(cid:744)(cid:745)(cid:743)(cid:726)(cid:745)(cid:730)(cid:744)(cid:3) (cid:744)(cid:737)(cid:727)(cid:728)(cid:741)(cid:742)(cid:743)(cid:724)(cid:737)(cid:727)(cid:732)(cid:737)(cid:730) (cid:3) (cid:738)(cid:729)(cid:3)(cid:748)(cid:738)(cid:744)(cid:741)(cid:739)(cid:724)(cid:741)(cid:743)(cid:737)(cid:728)(cid:741)(cid:990)(cid:742)(cid:3)(cid:729)(cid:728)(cid:728)(cid:735)(cid:732)(cid:737)(cid:730)(cid:742) (cid:692)(cid:713)(cid:740)(cid:3)(cid:748)(cid:733)(cid:726)(cid:745)(cid:3)(cid:730)(cid:749)(cid:745)(cid:730)(cid:739)(cid:745)(cid:3)(cid:729)(cid:740)(cid:3)(cid:750)(cid:740)(cid:746)(cid:3)(cid:727)(cid:730)(cid:737)(cid:734)(cid:730)(cid:747)(cid:730)(cid:3)(cid:745)(cid:733)(cid:734)(cid:744)(cid:3)(cid:726)(cid:739)(cid:734)(cid:738)(cid:726)(cid:745)(cid:734)(cid:740)(cid:739)(cid:3)(cid:743)(cid:730)(cid:742)(cid:746)(cid:730)(cid:744)(cid:745)(cid:744)(cid:3) (cid:729)(cid:738)(cid:735)(cid:735)(cid:738)(cid:746)(cid:672)(cid:744)(cid:739)(cid:3)(cid:724)(cid:726)(cid:743)(cid:732)(cid:738)(cid:737) (cid:756) (cid:667)(cid:724)(cid:729)(cid:743)(cid:728)(cid:741)(cid:3)(cid:748)(cid:738)(cid:744)(cid:666)(cid:745)(cid:728)(cid:742)(cid:728)(cid:737)(cid:743)(cid:3)(cid:743)(cid:731)(cid:728)(cid:3)(cid:724)(cid:737)(cid:732)(cid:736)(cid:724)(cid:743)(cid:732)(cid:738)(cid:737)(cid:668)(cid:3)(cid:729)(cid:741)(cid:738)(cid:736)(cid:3)(cid:748)(cid:738)(cid:744)(cid:741)(cid:3)(cid:739)(cid:724)(cid:741)(cid:743)(cid:737)(cid:728)(cid:741) 
(cid:692)(cid:713)(cid:740)(cid:3)(cid:748)(cid:733)(cid:726)(cid:745)(cid:3)(cid:730)(cid:749)(cid:745)(cid:730)(cid:739)(cid:745)(cid:3)(cid:729)(cid:740)(cid:3)(cid:750)(cid:740)(cid:746)(cid:3)(cid:727)(cid:730)(cid:737)(cid:734)(cid:730)(cid:747)(cid:730)(cid:3)(cid:745)(cid:733)(cid:734)(cid:744)(cid:3)(cid:726)(cid:739)(cid:734)(cid:738)(cid:726)(cid:745)(cid:734)(cid:740)(cid:739)(cid:3)(cid:744)(cid:733)(cid:740)(cid:748)(cid:744)(cid:3)(cid:745)(cid:733)(cid:726)(cid:745)(cid:3) (cid:748)(cid:738)(cid:744)(cid:741)(cid:3)(cid:739)(cid:724)(cid:741)(cid:743)(cid:737)(cid:728)(cid:741)(cid:3)(cid:732)(cid:742) (cid:3) (cid:745)(cid:724)(cid:735)(cid:744)(cid:728)(cid:727) (cid:692)(cid:709)(cid:737)(cid:730)(cid:726)(cid:744)(cid:730)(cid:3)(cid:748)(cid:743)(cid:734)(cid:745)(cid:730)(cid:3)(cid:740)(cid:739)(cid:730)(cid:3)(cid:741)(cid:740)(cid:744)(cid:744)(cid:734)(cid:727)(cid:737)(cid:730)(cid:3)(cid:738)(cid:730)(cid:744)(cid:744)(cid:726)(cid:732)(cid:730)(cid:3)(cid:745)(cid:733)(cid:726)(cid:745)(cid:3)(cid:745)(cid:733)(cid:734)(cid:744)(cid:3)(cid:726)(cid:739)(cid:734)(cid:738)(cid:726)(cid:745)(cid:734)(cid:740)(cid:739)(cid:3)(cid:728)(cid:740)(cid:739)(cid:747)(cid:730)(cid:750)(cid:744)(cid:3)(cid:734)(cid:739)(cid:3)(cid:748)(cid:740)(cid:743)(cid:729)(cid:744)(cid:675)(cid:702)(cid:738)(cid:726)(cid:732)(cid:734)(cid:739)(cid:730)(cid:3)(cid:726)(cid:3)(cid:738)(cid:730)(cid:744)(cid:744)(cid:726)(cid:732)(cid:730)(cid:3)(cid:750)(cid:740)(cid:746)(cid:3)(cid:743)(cid:730)(cid:728)(cid:730)(cid:734)(cid:747)(cid:730)(cid:729)(cid:756) (cid:729)(cid:741)(cid:738)(cid:736)(cid:3)(cid:748)(cid:738)(cid:744)(cid:741)(cid:3)(cid:739)(cid:724)(cid:741)(cid:743)(cid:737)(cid:728)(cid:741) (cid:754) 
(cid:745)(cid:733)(cid:726)(cid:745)(cid:3)(cid:748)(cid:740)(cid:746)(cid:737)(cid:729)(cid:3)(cid:741)(cid:743)(cid:740)(cid:738)(cid:741)(cid:745)(cid:3)(cid:750)(cid:740)(cid:746)(cid:3)(cid:745)(cid:740)(cid:3)(cid:743)(cid:730)(cid:744)(cid:741)(cid:740)(cid:739)(cid:729)(cid:748)(cid:734)(cid:745)(cid:733)(cid:3)(cid:745)(cid:733)(cid:734)(cid:744)(cid:3)(cid:726)(cid:739)(cid:734)(cid:738)(cid:726)(cid:745)(cid:734)(cid:740)(cid:739)(cid:675)(cid:3)(cid:709)(cid:737)(cid:730)(cid:726)(cid:744)(cid:730)(cid:3)(cid:729)(cid:730)(cid:744)(cid:728)(cid:743)(cid:734)(cid:727)(cid:730)(cid:3)(cid:745)(cid:733)(cid:726)(cid:745)(cid:3)(cid:738)(cid:730)(cid:744)(cid:744)(cid:726)(cid:732)(cid:730)(cid:675) (cid:707)(cid:740)(cid:745)(cid:3)(cid:726)(cid:745)(cid:3)(cid:726)(cid:737)(cid:737) (cid:756) (cid:715)(cid:730)(cid:743)(cid:750)(cid:3)(cid:738)(cid:746)(cid:728)(cid:733)(cid:707)(cid:740)(cid:745)(cid:3)(cid:726)(cid:745)(cid:3)(cid:726)(cid:737)(cid:737) (cid:756) (cid:715)(cid:730)(cid:743)(cid:750)(cid:3)(cid:738)(cid:746)(cid:728)(cid:733)(cid:707)(cid:740)(cid:745)(cid:3)(cid:726)(cid:745)(cid:3)(cid:726)(cid:737)(cid:737) (cid:756) (cid:715)(cid:730)(cid:743)(cid:750)(cid:3)(cid:738)(cid:746)(cid:728)(cid:733)(cid:707)(cid:740)(cid:745)(cid:3)(cid:726)(cid:745)(cid:3)(cid:726)(cid:737)(cid:737) (cid:756) (cid:715)(cid:730)(cid:743)(cid:750)(cid:3)(cid:738)(cid:746)(cid:728)(cid:733)(cid:707)(cid:740)(cid:745)(cid:3)(cid:726)(cid:745)(cid:3)(cid:726)(cid:737)(cid:737) (cid:756) (cid:715)(cid:730)(cid:743)(cid:750)(cid:3)(cid:738)(cid:746)(cid:728)(cid:733)(cid:707)(cid:740)(cid:745)(cid:3)(cid:726)(cid:745)(cid:3)(cid:726)(cid:737)(cid:737) (cid:756) (cid:715)(cid:730)(cid:743)(cid:750)(cid:3)(cid:738)(cid:746)(cid:728)(cid:733)