Heart Rate Monitoring as an Easy Way to Increase Engagement in Human-Agent Interaction
Jérémy Frey, Univ. Bordeaux, LaBRI, UMR 5800, F-33400 Talence, France; CNRS, LaBRI, UMR 5800, F-33400 Talence, France; INRIA, F-33400 Talence, France. [email protected]
Keywords:
Heart Rate, Human-Agent Interaction, Similarity-Attraction, Engagement, Social Presence
Abstract: Physiological sensors are gaining the attention of manufacturers and users, as shown by devices such as smartwatches, the newly released Kinect 2 (which can covertly measure heartbeats), or the popularity of smartphone apps that track heart rate during fitness activities. Soon, physiological monitoring could become widely accessible and transparent to users. We demonstrate how one could take advantage of this situation to increase users' engagement and enhance user experience in human-agent interaction. We created an experimental protocol involving embodied agents ("virtual avatars") displayed alongside a beating heart. We compared a condition in which this feedback simply duplicated the users' heart rates to another condition in which it was set to an average heart rate. Results suggest a greater social presence of agents when they display feedback similar to users' internal state. This physiological "similarity-attraction" effect may lead, with little effort, to a better acceptance of agents and robots by the general public.
Covert sensing of users' physiological state is likely to open new communication channels between humans and computers. When anthropomorphic characteristics are involved, as with embodied agents, mirroring such physiological cues could guide users' preferences in a cheap yet effective manner.

One aspect of human-computer interaction (HCI), albeit difficult to account for, lies in users' engagement. Engagement may be seen as a way to increase performance, as in the definition given by (Matthews et al., 2002) for task engagement: an "effortful striving towards task goals". In a broader sense, the notion of engagement is also related to fun and accounts for the overall user experience (Mandryk et al., 2006). Several HCI components can be tuned to improve engagement. For example, content and challenge need to be adapted and renewed to avoid boredom and maintain users in a state of flow (Berta et al., 2013). It is also possible to study interfaces: (Karlesky and Isbister, 2014) use tangible interactions in the surrounding space to spur engagement and creativity. When the interaction encompasses embodied agents, either physical (i.e., robots) or not (on-screen avatars), anthropomorphic characteristics can be involved to seek better human-agent connections.

Following the advent of affective computing (Picard, 1995), studies began to emerge that use agents with human features in order to respond to users with appropriate emotions and behaviors. (Prendinger et al., 2004) created an "empathic" agent that serves as a companion during a job interview. While playing on empathy to engage users more deeply in the simulation proved conclusive, the difficulty lies in the accurate recognition of emotions. Even with physiological sensors, as the authors used with galvanic skin response and electromyography, no signal processing can yet reach an accuracy of 100%, even on a reduced set of emotions; see (Lisetti and Nasoz, 2004) for a review.

Humans are difficult for computers to comprehend, and yet humans are more attracted to others, human or machine, that match their personalities (Lee and Nass, 2003). This finding is called "similarity-attraction" in (Lee and Nass, 2003) and was tested by the authors by matching the parameters of a synthesized speech (e.g., paralinguistic cues) to users, depending on whether they were introverted or extroverted. An analogous effect on social presence and engagement in HCI has been described in (Reidsma et al., 2010), this time under the name of "synchrony" and focusing on nonverbal cues (e.g., gestures, choice of vocabulary, timing). Unfortunately, being somewhat linked to a theory of mind, such improvements lean on tedious measures, for instance psychological tests or recordings of users' behaviors. What if similarity-attraction could be effective with cues that are much simpler and easier to set up?

Indeed, at a lower level of information, (Slovák et al., 2012) studied how the display of heart rate (HR) could impact social presence during human-human interaction. They showed that, with no further processing than the computation of an average heartbeat, users reported in various contexts feeling closer or more connected to the person with whom they shared their HR. We wondered whether a similar effect could be obtained between a human and a machine.
Moreover, we anticipated the rise of devices that can covertly measure physiological signals, such as the Kinect 2, which can use its cameras (color and infrared) to compute users' HRs; the use of video feeds to perform volumetric measurements of organs is known as "photoplethysmography" (Kranjec et al., 2014).

Consequently, we built on this theory and hypothesized that users would feel more connected toward an embodied agent if it displays a heart rate similar to theirs, even if users do not realize that their own heart rates are being monitored. By relying on a simple mirroring of users' physiology, we avoid the need to test users' personality (Lee and Nass, 2003) or to process, and possibly fail to recognize, their internal state (Prendinger et al., 2004). Creating agents too similar to humans may provoke rejection and deter engagement due to the uncanny valley effect (MacDorman, 2005). Since we do not emphasize the link between users' physiological cues and the feedback given by agents, we hope to prevent such a negative effect. Similarity-attraction applied to physiological data should work at an almost subconscious level. Furthermore, implicit feedback makes it easier to improve an existing HCI: only the feedback associated with the agent has to be added to the application, and that feedback can take a less anthropocentric form; see for example (Harrison et al., 2012) for the multiple meanings a blinking light can convey and (Huppi et al., 2003) for a use case with breathing-like features. Should our hypothesis prove robust, it could benefit virtually any human-agent interaction, increasing agents' social presence and engaging users.

The following sections describe an experimental setup involving embodied agents that compares two within-subject conditions: one in which agents display heartbeats replicating the HR of the users, and one in which the displayed heartbeats are not linked to users. Our main contribution is to show first evidence that displaying identical heart rates makes users more engaged toward agents.
The main task of our HCI consisted in listening to embodied agents while they spoke aloud sentences extracted from a text corpus, as inspired by (Lee and Nass, 2003). When an agent was on-screen, a beating heart was displayed below it and an audio recording of a heart pulse was played along each (fake) beat. This feedback constituted our first within-subject factor: either the displayed HR was identical to the subject's ("human" condition), or it was set to an average HR ("medium" condition). The HR in the "medium" condition ranged from 66 to 74 BPM (beats per minute), which is the grand average for our studied population (Agelink et al., 2001).

Agents possessed some random parameters: their gender (male or female), their appearance (6 faces of different ethnic groups for each gender), their voice (2 voices for each gender) and the voice pitch. These various parameters aimed at concealing the true independent variable. Had we chosen a unique appearance for all the agents, subjects could have sought what differentiated them. By individualizing agents we prevented subjects from discovering that we ultimately manipulated the HR feedback. To make agents look more alive, their eyes blinked sporadically and their mouths were animated while the text-to-speech system was playing.

In order to elicit bodily reactions, we chose sentences with which a particular valence has been associated and which, as such, could span a wide range of emotions. Valence relates to the hedonic tone and varies from negative (e.g., sad) to positive (e.g., happy) emotions (Picard, 1995). HR tends to increase when one is experiencing extreme pleasantness, and to decrease when experiencing unpleasantness (Winton et al., 1984).

Our experiment was split in two parts (second within-subject factor). During the first session, called the "disruptive" session (see Figure 1), subjects had to rate each sentence they heard on a 7-point Likert scale according to the valence they perceived (very unpleasant to very pleasant). Sentences came from newspapers. A valence (negative, neutral or positive) was randomly chosen every 2 sentences. Every 4 sentences, subjects had to rate the social presence of the agent. Then a new randomly generated agent appeared, for a total of 20 agents, 10 for each "human"/"medium" condition.

Figure 1: Procedure during the "disruptive" session: subjects rate the valence of each sentence spoken by an agent. After 4 sentences, they rate the agent's social presence (3 items). Then a new agent appears. 20 agents in total.

As opposed to the first part, during the second part of the experiment, called the "involving" session, the sentences order was sequential (see Figure 2). Agents were in turn narrating a fairy tale. Subjects did not have to rate each sentence's valence; instead they only rated the social presence of the agents. To match the length of the story, agents were shuffled every 6 sentences and there were 23 agents in total, 12 for the "human" condition and 11 for the "medium" condition.

Because of its distracting task and the nature of its sentences, the first part was more likely to disrupt the human-agent connection, while the second part was more likely to involve subjects. This let us test the influence of the relation between users and agents on the perception of HR feedback. We chose not to randomize the sessions order because we estimated that putting the "disruptive" session last would have made the overall experiment too fatiguing for subjects.
A higher level of vigilance was necessary to sustain its distracting task and series of unrelated sentences, and subjects' cognitive resources were probably higher at the beginning of the experiment.

We thus created a 2 (HR feedback: "human" vs "medium" condition) x 2 (nature of the task: "disruptive" vs "involving" session) within-subject experimental plan. Hence our two hypotheses. H1: Heart rate feedback replicating users' physiology increases the social presence of agents. H2: This effect is more pronounced during an interaction that involves agents more deeply.

Figure 2: Procedure during the "involving" session: subjects rate the agent's social presence after it has recited all its sentences. Then a new agent appears, continuing the tale. 23 agents in total.

Most of the elements we describe in this section, hardware or software, come from open source movements, for which we are grateful. The authors would also like to thank the artist who made freely available the graphics on which the agents are based (http://harridan.deviantart.com/). All code and materials related to the study are freely available at https://github.com/jfrey-phd/2015_phycs_HR_code/.

We chose to use a BVP (blood volume pulse) sensor to measure HR, employing the open hardware Pulse Sensor (http://pulsesensor.myshopify.com; see Figure 3 for a closeup). It assesses blood flow variations by emitting a light onto the skin and measuring, with an ambient light photo sensor, how the intensity of the reflected light fluctuates. Each heartbeat produces a characteristic signal. This technology is cheap and easy to implement. While it is less accurate than electrocardiography (ECG) recordings, we found the HR measures to be reliable enough for our purpose. Compared to ECG, BVP sensors are less intrusive and quicker to install: one sensor around a finger or on an earlobe instead of 2 or 3 electrodes on the chest. In addition, as far as general knowledge is concerned, BVP sensors are less likely to point out the exact nature of their measures. This "fuzziness" is important for our experimental protocol, as we want to be as close as possible to the real-life scenarios we foresee with devices such as the Kinect 2, where HR recordings will be transparent to users.

The BVP sensor was connected to an Arduino Due (http://arduino.cc/; see Figure 3). Arduino boards have become a well-established platform for electrical engineering, and the Due model stands out thanks to its 12-bit resolution for operating analog sensors. The program uploaded to the Arduino Due fed the serial port with BVP values every 2 ms, thus achieving a 500 Hz sampling rate.

Figure 3: BVP (blood volume pulse) sensor measuring heartbeats, connected to an Arduino Due.
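To give a concrete idea of what this acquisition stage produces, here is a minimal host-side sketch in Python that reads such a 500 Hz sample stream from the serial port. It is not the code released with the study: the port name, baud rate, line format and the pyserial dependency are all assumptions.

# Minimal sketch (assumptions: pyserial installed, the Arduino prints one
# integer BVP sample per line at 500 Hz; port name and baud rate are guesses).
import serial

PORT = "/dev/ttyACM0"   # hypothetical device name for an Arduino Due
BAUD = 115200           # assumed baud rate

def read_bvp(n_samples=5000):
    """Collect raw BVP samples (about 10 s at 500 Hz) from the serial port."""
    samples = []
    with serial.Serial(PORT, BAUD, timeout=1) as ser:
        while len(samples) < n_samples:
            line = ser.readline().strip()
            if not line:
                continue
            try:
                samples.append(int(line.decode()))  # one analog reading (12-bit ADC: 0..4095)
            except ValueError:
                pass                                # ignore malformed lines
    return samples

if __name__ == "__main__":
    data = read_bvp()
    print(f"collected {len(data)} samples")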
Two computers were used. The first, a laptop with a 14-inch screen, was dedicated to the subject and ran the human-agent interaction. This computer was also plugged to the Arduino board to accommodate the sensor's cable length. A second laptop was used by the experimenter to monitor the experiment and to detect heartbeats. The computers were connected through an ethernet cable (network latency was below 1 ms).

Both computers were running the Kubuntu 13.10 operating system. The software on the client side was programmed with the Processing framework, version 2.2.1. Data acquired from the BVP sensor was streamed to the local network with ser2sock (https://github.com/nutechsoftware/ser2sock). This serial-port-to-TCP bridge allowed us to reliably process and record data on our second computer. OpenViBE (Renard et al., 2010) version 0.18 was running on the experimenter's computer to process the BVP signal.

Within OpenViBE the BVP values were interpolated from 500 to 512 Hz to ease computations. The script which received values from TCP downsampled or oversampled the packets' content to ensure synchronization and decrease the risk of distorted signals due to network or computing latency. A 3 Hz low-pass filter was applied to the acquired data in order to eliminate artifacts. Then a derivative was computed. Since a heartbeat provokes a sudden variation of blood flow, a pulsation was detected when the signal exceeded a certain threshold. This threshold was set during installation: values too low could produce false positives due to remaining noise, and values too high could skip heartbeats. Eventually a message was sent. See Figure 4 for an overview of the signal processing.

Figure 4: Signal processing of the BVP sensor with OpenViBE. A low-pass filter and a first derivative are used to detect heartbeats.
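For readers who do not use OpenViBE, the same detection chain (3 Hz low-pass, first derivative, threshold crossing) can be sketched offline in Python with NumPy/SciPy; the conversion from inter-beat intervals to BPM anticipates the next paragraph. The threshold value, filter order and function names are ours, not taken from the released OpenViBE scenario.

# Sketch of the described detection chain: 3 Hz low-pass, first derivative,
# threshold crossing. Not the OpenViBE scenario used in the study.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 512.0          # sampling rate after interpolation (Hz)
LOWPASS_HZ = 3.0    # cutoff used to remove high-frequency noise
THRESHOLD = 50.0    # detection threshold on the derivative (device-dependent guess)

def detect_beats(bvp, fs=FS):
    """Return the sample indices at which a heartbeat is detected."""
    b, a = butter(4, LOWPASS_HZ / (fs / 2.0), btype="low")  # 4th-order Butterworth
    smoothed = filtfilt(b, a, bvp)          # zero-phase low-pass filtering
    derivative = np.diff(smoothed) * fs     # blood flow variation per second
    above = derivative > THRESHOLD
    onsets = np.where(above[1:] & ~above[:-1])[0] + 1   # rising threshold crossings
    return onsets

def beats_to_bpm(beat_indices, fs=FS):
    """Convert inter-beat intervals into instantaneous heart rates in BPM."""
    ibi = np.diff(beat_indices) / fs        # inter-beat intervals in seconds
    return 60.0 / ibi

if __name__ == "__main__":
    # Toy signal: a 1.2 Hz sine, i.e. roughly 72 BPM.
    fake_bvp = np.sin(2 * np.pi * 1.2 * np.arange(0, 10, 1 / FS)) * 100
    beats = detect_beats(fake_bvp)
    print(np.round(beats_to_bpm(beats), 1))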
Once the main program received a pulse message, it computed the HR from the delay between two beats. This value was passed on to the engine handling the HR feedback during the "human" condition. We purposely created an indirection here, using BPM values in separate handlers instead of triggering a feedback pulse as soon as a heartbeat was detected, in order to make our experimental protocol suitable for devices that can only average HR over a longer time window (e.g., fitness HR monitor belts). It should thus be easier to replicate our results without the need to precisely synchronize feedback pulses with actual heartbeats.

The TTS (text-to-speech) system comprised two applications. eSpeak (http://espeak.sourceforge.net/) was used to transform textual sentences into phonemes and MBROLA (http://tcts.fpms.ac.be/synthesis/mbrola.html) to synthesize the phonemes and produce an actual voice. The TTS speed was controlled by eSpeak (120 words per minute), as was the pitch (between 65 and 85, values higher than the baseline of 50 to match the teenage appearance of the agents). The four voices (2 male and 2 female, "fr1" to "fr4") were provided by the MBROLA project. The sentences' valence did not influence speech synthesis.

During the first part of the experiment (i.e., the "disruptive" session) sentences were gathered from the archives of a French-speaking newspaper. These data were collated by (Bestgen et al., 2004). Sentences were anonymized, e.g., names of personalities were replaced by generic first names. A panel of 10 judges evaluated their emotional valence on a 7-point Likert scale. The final scores were produced by averaging those 10 ratings. We split the sentences into three categories: unpleasant (scores between [-3; -1[, e.g., a suspect was arrested for murder), neutral (between [-1; 1]) and pleasant (between ]1; 3], e.g., the national sport team won a match); see section 2.

The sentences of the second part (i.e., the "involving" session) come from the TestAccord Emotion database (Le Tallec et al., 2011). This database originates from a fairy tale for children; see (Wright and McCarthy, 2008) for an example of storytelling as an incentive for empathy. We did not utilize the associated valences per se (average of a 5-point Likert scale across 27 judges for each sentence), but as an indicator they did help us ensure a wide variety of carried emotions. For instance, deaths and bonding moments are described during the course of the tale. It is worth noting that when the valence of these corpora was established, sentences were presented in their textual form, not through a TTS system.

The overall experiment took approximately 50 minutes per subject. 10 French-speaking subjects participated in the experiment; 5 males, 5 females, mean age 30.3 (SD=8.2). The whole procedure comprised the following steps:

1. Subjects were given an informed consent form and a demographic questionnaire. While they filled the forms, the equipment was set up. Then we explained to them the procedure of the experiment. We emphasized the importance of the distraction task (i.e., to rate the sentences' valence) and explained to the subjects that we were monitoring their physiological state, without further detail about the exact measures.

Figure 5: Our experimental setup. A BVP sensor connects the subject's earlobe to the first laptop, where the human-agent interaction takes place. The subject is wearing a headset to listen to the speech synthesis. A second laptop is used by the experimenter to monitor heartbeat detection.
4. We ran the experiment, as previously described: first the "disruptive" session (80 sentences, 20 agents, ≈ 22 min), then the "involving" session (138 sentences, 23 agents, ≈ 17 min). We monitored the data acquired from the BVP sensor and silently adjusted the heartbeat detection through OpenViBE if needed; rarely, a big head movement could slightly move the sensor and modify the signal amplitude. Figure 5 illustrates our setup (≈ 40 min in total).

The newspaper sentences being longer than the ones forming the fairy tale, the agents' on-screen time varied between both parts. The agents' mean display time was 62.2 s during the first part and 46.6 s during the second part.
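As an aside, the agent randomization described in the protocol above (gender, one of 6 faces per gender, one of 2 voices per gender, pitch between 65 and 85, plus the concealed HR-feedback condition) boils down to a short generator. The sketch below is illustrative only: the field names, the voice-to-gender mapping and the data structure are assumptions, not taken from the released code.

# Illustrative generator for agent parameters (hypothetical field names).
import random

FACES_PER_GENDER = 6
VOICES = {"male": ["fr1", "fr3"], "female": ["fr2", "fr4"]}  # assumed gender mapping

def make_agent(condition):
    """Draw the random cosmetic parameters that conceal the HR-feedback condition."""
    gender = random.choice(["male", "female"])
    return {
        "gender": gender,
        "face": random.randrange(FACES_PER_GENDER),   # 6 faces per gender
        "voice": random.choice(VOICES[gender]),       # 2 MBROLA voices per gender
        "pitch": random.randint(65, 85),              # eSpeak pitch above the default of 50
        "condition": condition,                       # "human" or "medium" HR feedback
    }

# "Disruptive" session: 20 agents, 10 per condition, in shuffled order.
conditions = ["human"] * 10 + ["medium"] * 10
random.shuffle(conditions)
agents = [make_agent(c) for c in conditions]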
We computed a social presence score for each agent, averaged from the 7-point Likert scale questionnaires presented to the subjects before a new agent was generated. This methodology was validated with spoken dialogue systems by (Möller et al., 2007). The score was composed of 3 items, consistent with ITU guidelines (ITU, 2003). Translated to English, the items were: "Do you consider that the agent is pleasant?" ("very unpleasant" to "very pleasant"); "Do you think it is friendly?" ("not at all" to "very friendly"); "Did it seem 'alive'?" ("not at all" to "much alive").
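To make the scoring and the upcoming comparison concrete, the sketch below averages the three items into a per-agent score and runs a paired Wilcoxon signed-rank test between conditions. The study performed this analysis in R 3.0.1; SciPy is used here as an equivalent, and the data layout (per-subject aggregation, toy numbers) is invented for the example.

# Sketch of the scoring and paired comparison (the study used R; this is a
# SciPy equivalent on an invented data layout).
import numpy as np
from scipy.stats import wilcoxon

def social_presence(item_scores):
    """Average the three 7-point items (0..6) rated for one agent."""
    return float(np.mean(item_scores))

# ratings[subject][condition] -> list of per-agent item triplets (toy data)
ratings = {
    "s01": {"human":  [[4, 5, 3], [4, 4, 4]], "medium": [[3, 3, 2], [4, 3, 3]]},
    "s02": {"human":  [[5, 4, 4], [3, 4, 5]], "medium": [[3, 4, 3], [3, 3, 4]]},
    "s03": {"human":  [[4, 4, 5], [5, 4, 4]], "medium": [[4, 3, 3], [3, 4, 3]]},
}

def per_subject_means(condition):
    """One mean social presence score per subject for the given condition."""
    return [np.mean([social_presence(a) for a in subj[condition]])
            for subj in ratings.values()]

human = per_subject_means("human")
medium = per_subject_means("medium")
stat, p = wilcoxon(human, medium)   # paired, non-parametric comparison
print(f"human={np.mean(human):.2f}  medium={np.mean(medium):.2f}  p={p:.3f}")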
We compared the agents' social presence scores between the "human" and the "medium" conditions for each part. Statistical analyses were performed with R 3.0.1. The different scores were comprised between 0 (negative) and 6 (positive), 3 corresponding to neutral.

A Wilcoxon signed-rank test showed a significant difference (p < 0.05) during the "disruptive" session (mean score 3.29 in the "human" condition against a lower mean in the "medium" condition), whereas no significant difference appeared during the "involving" session.

In the course of the "disruptive" session our main hypothesis was confirmed: users' engagement toward our HCI increased when agents provided feedback mirroring their physiological state. This result could not be explained by a preference for a certain pace of the HR feedback. For instance, even though their HRs were higher than average, subjects did not prefer agents of the "human" condition because of faster heartbeats: some of them had HRs lower than 70 BPM. The only other explanation lies in the difference of HR synchronization between the "human" and "medium" conditions.

Besides agents' social presence, the similarity-attraction effect may influence the general mood of subjects, as they had a slight tendency to overrate sentence valence during the "human" condition. It is interesting to note that while the increase in social presence scores is not huge (+13%), it shifts the items from slightly unpleasant to slightly pleasant.

Maybe the effect would have been greater in a less artificial situation. Indeed, despite our experimental protocol, subjects reported afterwards that the TTS system was sometimes hard to comprehend, which bothered them on some occasions. It may have resulted in a task not involving enough for subjects to really "feel" the emotions carried by the sentences.

Several reasons could explain why the effect appeared only during our "disruptive" session. During the first session agents were displayed for a longer duration (+33%) because of the longer sentences used in the newspapers; the attraction toward a mirrored feedback could take time to occur. In addition, because the task was less disruptive in the second session, subjects were more likely to focus their attention on the content (i.e., the narrative) instead of the interface (i.e., the feedback). This could explain why they were less sensitive to ambient cues. Subjects were less solicited during the "involving" session; we observed that between agent questionnaires they often removed their hands from the mouse, leaning back in the chair. Lastly, the "involving" session systematically occurred in second position. Maybe the occurrence of the similarity-attraction effect is correlated with the degree of users' vigilance.

As for subjects' awareness of the real goal of the study, during informal discussions after the experiments, most of them confirmed that they had no knowledge of the kind of physiological trait the sensor was recording, and none of them realized that at some point they were exposed to their own HR. This increases the resemblance of our installation to a setup where HR sensing occurs covertly.
We demonstrated how displaying physiological signals close to the users' own could positively impact the social presence of embodied agents. This approach of "ambient" feedback is easier to set up and less prone to errors than feedback as explicit as facial expressions. It requires neither prior knowledge about users nor complex computations. For practical reasons we limited our study to a virtual agent; we believe the similarity-attraction effect could be even more dramatic with physically embodied agents, namely robots. That said, other pieces of hardware or components of an HCI could benefit from such an approach. While its appearance is not anthropomorphic, the robotic lamp presented by (Gerlinghaus et al., 2012) behaves like a sentient being; augmenting it with physiological feedback, especially when correlated to users, is likely to increase its presence.

Further research is of course mandatory to confirm and analyze how similarity-attraction applies to human-agent interaction and to physiological computing. The kind of feedback given to users needs to be studied. Are both audio and visual cues necessary? Does the look of the measured physiological signal need to be obvious, or could a heart pulse take the form of a blinking light? In human-human interaction such questions are more and more debated (Slovák et al., 2012; Walmink et al., 2014). Obviously, one should check that a physiological feedback does not diminish user experience. (Lee et al., 2014) suggest this is not the case, but the comparison should be made again with human-agent interaction.

Various parameters in human-agent interaction need to be examined to shape the limits of the similarity-attraction effect: exposure time to agents, nature of the task, involvement of users, and so on. In particular, we suspect the relation between human and agent to be an important factor; gaming settings are good opportunities to try collaboration or antagonism. Concerning users, some will perceive the physiological feedback differently. As a matter of fact, interoception, the awareness of internal body states, varies from person to person and affects how we feel toward others (Fukushima et al., 2011). It will be beneficial to record users' reactions finely, maybe by using the very same physiological sensors (Becker and Prendinger, 2005).

Finally, our findings should be replicated with other hardware. We used lightweight equipment to monitor HR, yet devices such as the Kinect 2, if as reliable as BVP or ECG sensors, will enable remote sensing in the near future. But with the spread of devices that sense users' physiological states, it is essential not to forgo ethics.

Measuring physiological signals such as HR enters the realm of privacy. Notably, physiological sensors can make accessible to others data unknown to oneself (Fairclough, 2014). Even though among a certain population there is a trend toward the exposure of private data, if no agreement is provided it is difficult to avoid a violation of intimacy. Users may feel the urge to publish online the performance associated with their last run, including HR, as more and more products that monitor it for fitness' sake are sold, but experimenters and developers have to remain cautious.

Physiological sensors are becoming cheaper and smaller, and hardware manufacturers are increasingly interested in embedding them in their products. With sensor acceptance, smartwatches may tomorrow provide a wide range of continuous physiological data, along with remote sensing through cameras.
If users' rights and privacy are protected, this could open a wide range of areas for investigating and putting into practice the similarity-attraction effect. Heart rate, galvanic skin response, breathing, eye blinks: we "classify" events coming from the outside world and they influence our physiology. An agent that seamlessly reacts like us, based on the outputs we produce ourselves, could drive users' engagement.
REFERENCES
Agelink, M. W., Malessa, R., Baumann, B., Majewski, T., Akila, F., Zeit, T., and Ziegler, D. (2001). Standardized tests of heart rate variability: normal ranges obtained from 309 healthy humans, and effects of age, gender, and heart rate. Clinical Autonomic Research, 11(2):99–108.
Becker, C. and Prendinger, H. (2005). Evaluating affective feedback of the 3D agent Max in a competitive cards game. In Affective Computing and Intelligent Interaction, pages 466–473.
Berta, R., Bellotti, F., De Gloria, A., Pranantha, D., and Schatten, C. (2013). Electroencephalogram and physiological signal analysis for assessing flow in games. IEEE Transactions on Computational Intelligence and AI in Games, 5(2):164–175.
Bestgen, Y., Fairon, C., and Kerves, L. (2004). Un baromètre affectif effectif: Corpus de référence et méthode pour déterminer la valence affective de phrases. Journées internationales d'analyse statistique des données textuelles (JADT).
Fairclough, S. H. (2014). Human sensors - perspectives on the digital self. Keynote at Sensornet '14.
Fukushima, H., Terasawa, Y., and Umeda, S. (2011). Association between interoception and empathy: evidence from heartbeat-evoked brain potential. International Journal of Psychophysiology, 79(2):259–65.
Gerlinghaus, F., Pierce, B., Metzler, T., Jowers, I., Shea, K., and Cheng, G. (2012). Design and emotional expressiveness of Gertie (an open hardware robotic desk lamp). IEEE RO-MAN '12, pages 1129–1134.
Harrison, C., Horstman, J., Hsieh, G., and Hudson, S. (2012). Unlocking the expressivity of point lights. In CHI '12, page 1683, New York, New York, USA. ACM Press.
Huppi, B. Q., Stringer, C. J., Bell, J., and Capener, C. J. (2003). United States Patent 6658577: Breathing status LED indicator.
ITU (2003). P.851, Subjective quality evaluation of telephone services based on spoken dialogue systems. International Telecommunication Union, Geneva.
Karlesky, M. and Isbister, K. (2014). Designing for the physical margins of digital workspaces: Fidget widgets in support of productivity and creativity. In TEI '14.
Kranjec, J., Beguš, S., Geršak, G., and Drnovšek, J. (2014). Non-contact heart rate and heart rate variability measurements: A review. Biomedical Signal Processing and Control, 13:102–112.
Le Tallec, M., Antoine, J.-Y., Villaneau, J., and Duhaut, D. (2011). Affective interaction with a companion robot for hospitalized children: a linguistically based model for emotion detection.
Lee, K. M. and Nass, C. (2003). Designing social presence of social actors in human computer interaction. In Proceedings of the Conference on Human Factors in Computing Systems - CHI '03, number 5, page 289, New York, New York, USA. ACM Press.
Lee, M., Kim, K., Rho, H., and Kim, S. J. (2014). Empatalk. In CHI EA '14, pages 1897–1902, New York, New York, USA. ACM Press.
Lisetti, C. L. and Nasoz, F. (2004). Using noninvasive wearable computers to recognize human emotions from physiological signals. EURASIP Journal on Advances in Signal Processing, 2004(11):1672–1687.
MacDorman, K. (2005). Androids as an experimental apparatus: Why is there an uncanny valley and can we exploit it? CogSci-2005 Workshop: Toward Social Mechanisms of Android Science, 3.
Mandryk, R., Inkpen, K., and Calvert, T. (2006). Using psychophysiological techniques to measure user experience with entertainment technologies. Behaviour & Information Technology.
Matthews, G., Campbell, S. E., Falconer, S., Joyner, L. A., Huggins, J., Gilliland, K., Grier, R., and Warm, J. S. (2002). Fundamental dimensions of subjective state in performance settings: Task engagement, distress, and worry. Emotion, 2(4):315–340.
Möller, S., Smeele, P., Boland, H., and Krebber, J. (2007). Evaluating spoken dialogue systems according to de-facto standards: A case study. Computer Speech & Language, 21(1):26–53.
Picard, R. W. (1995). Affective computing. Technical Report 321, MIT Media Laboratory.
Prendinger, H., Dohi, H., and Wang, H. (2004). Empathic embodied interfaces: Addressing users' affective state. In Affective Dialogue Systems, pages 53–64.
Reidsma, D., Nijholt, A., Tschacher, W., and Ramseyer, F. (2010). Measuring multimodal synchrony for human-computer interaction. Pages 67–71. IEEE.
Renard, Y., Lotte, F., Gibert, G., Congedo, M., Maby, E., Delannoy, V., Bertrand, O., and Lécuyer, A. (2010). OpenViBE: An open-source software platform to design, test, and use brain-computer interfaces in real and virtual environments. Presence: Teleoperators and Virtual Environments, 19(1):35–53.
Slovák, P., Janssen, J., and Fitzpatrick, G. (2012). Understanding heart rate sharing: towards unpacking physiosocial space. CHI '12, pages 859–868.
Walmink, W., Wilde, D., and Mueller, F. F. (2014). Displaying heart rate data on a bicycle helmet to support social exertion experiences. In TEI '14.
Winton, W. M., Putnam, L. E., and Krauss, R. M. (1984). Facial and autonomic manifestations of the dimensional structure of emotion. Journal of Experimental Social Psychology, 20(3):195–216.
Wright, P. and McCarthy, J. (2008). Empathy and experience in HCI.