A Framework for Monitoring Human Physiological Response during Human Robot Collaborative Task
Accepted in SMC
Celal Savur, Shitij Kumar, Ferat Sahin
Department of Electrical and Microelectronic Engineering
Rochester Institute of Technology
Rochester, NY, 14623, USA
{cs1323, spk4422, feseee}@rit.edu

Abstract—In this paper, a framework for monitoring human physiological response during a Human-Robot Collaborative (HRC) task is presented. The framework highlights the importance of generating event markers related to both the human and the robot, and of synchronizing the collected data. It enables continuous data collection during an HRC task in which changing robot movements serve as stimuli to invoke a human physiological response. The paper also presents two case studies based on this framework and a data visualization tool for representing and easily analyzing the data collected during an HRC experiment.
Index Terms—Physiological Signals, Psychophysiology, Human-Robot Interaction, Collaborative Robots, Safety, Awareness, Digital-Twin, Physiological Computing
I. INTRODUCTION
The major challenges of any Human-Robot Collaboration (HRC) in industry are human safety, human trust in automation, and productivity [1]. Human safety has always been the primary concern in robotics. One main safety concern is injuries due to human-robot collision. Different strategies have been introduced to ensure human safety; one is implementing physical and electronic safeguards according to industrial standards [2]. However, new strategies and approaches are needed for human-robot collaboration, where fewer standards are available to implement complex protection schemes. Hence a new category of robots called collaborative robots, or cobots, has been introduced to the market (e.g., Universal Robots, KUKA LBR iiwa, and Rethink Robotics Sawyer, to name a few). These robots are purposely designed to work in direct cooperation with humans in a defined workspace by lowering the severity and risk of injury due to collision.

Human trust in automation is about managing human expectations and how comfortable the human is sharing the robot workspace. Even though cobots decrease the risk of injury, any form of physical collision decreases human trust in automation. Thus, collision avoidance strategies, such as stopping or reducing speed while the human is in the operating workspace of the robot, have been implemented [3], [2]. However, the question arises: how do we quantify a human's trust in automation?

In a human-robot interaction setup, a change in robot motion can affect human behavior. This was shown in the experiments done in [4] and [5]. The literature review in [6] highlights the use of psychophysiological methods to evaluate human response and behavior during human-robot interaction. In our opinion, continuous monitoring of physiological signals during a human-robot task is the first step in quantifying human trust in automation. Inferences from these signals, incorporated in real time to affect robot motion, can help enhance the human-robot interaction.
Such a system capable of 'physiological computing' will result in a closed human-in-the-loop system in which both the human and the robot in an HRC setup are monitored and information is shared. This could result in better communication, which would improve trust in automation and increase productivity.

Hence, in this work we propose a framework for a 'physiological computing' system to monitor human physiological responses during a human-robot collaboration task. This paper highlights the aspects and challenges of collecting human physiological signals during a human-robot experiment. It underscores the importance of a controlled HRC experiment design, of event marker generation related to both the human and the robot, and of the synchronization of the collected data. To verify this framework, a prototype implementation of the system is shown in case studies of two HRC experiments. The first case study is an experiment to monitor the effect of changes in robot acceleration and trajectory of motion on human physiological signals and to determine a human comfort index. In this experiment the human is sitting and sharing the workspace with a UR5e robot. The second experiment monitors human behavior under different safety algorithms during a human-robot collaborative task. This task is an implementation of a speed and separation monitoring setup in which a human and a UR10 robot perform two separate tasks while sharing a workspace [7]. Here, the human is not stationary and moves in the workspace, which requires wireless data acquisition of human physiological signals and a representation of the human-robot shared workspace. The final objective of this work is to generate a database that can be used to further the understanding of how human physiological responses can be inferred to result in adaptive robot motion behavior.

Psychophysiology is a branch of neuroscience that seeks to understand how a person's mental state and physiological responses interact to affect one another. Physiological computing represents a category of 'affective computing' that incorporates real-time software adaptation to the psychophysiological activity of the user.

The remainder of the paper is organized as follows: Section II describes the proposed framework for creating a 'physiological computing' system to monitor human physiological responses during a human-robot collaborative task. Based on this framework, two case studies are implemented in Section III and discussed in Section IV. Conclusions are drawn and future work is mentioned in Section V.

II. PROPOSED APPROACH
In this section the key aspects and challenges of monitoring human physiological response in Human-Robot Collaboration are presented. Asking the human subject questions during or after the experiment is common practice in human-robot collaboration and interaction experiments [6], [8]. These responses allow researchers to quantify the subjective data of the experiment. However, such methods, which interrupt the subject during the experiment, may not be desirable for maintaining the integrity of the desired physiological signals. In our opinion, an alternative approach is a system that is able to generate event markers automatically during the experiment and that enables the subject or the principal investigator to generate markers as the experiment is being performed. These event markers can then be used during post-processing by a field expert to identify the response to a given input. In this way, it can act as an alternative to asking questions during the experiment.

The block diagram of the proposed framework is shown in Figure 1. The proposed framework is a solution for concurrently and continuously monitoring the state of the human and the robot during an HRC task. From a systems perspective, the framework can be conceptually categorized into three sub-modules:
Awareness, Intelligence, and Compliance [9]. The communication layer between these sub-modules is equally important, as it is responsible for data transformation and synchronization.

The Awareness sub-module is the perception of the system, generated from the physical-world sensors and represented digitally in the virtual world. The physical world is responsible for sensing the environment through sensor information such as a PPG sensor, a GSR sensor, a camera, a motion capture system, etc. The virtual world, on the other hand, is a digital-twin representation of the physical world that mimics the environment of the HRC task as well as the movements and behavior of the robot and human agents [10]. The digital twin can be used to calculate metrics such as the human-robot minimum distance, directed human-robot speeds, possible collisions, and changes in trajectory [7], [11]. The virtual world constantly updates its state based on the sensory data received from the physical world and generates new data for the framework. Overall, Awareness is responsible for sensing the physical and virtual worlds and providing this data to the rest of the system. Such a setup helps digitally represent a combined human-robot state, which can then be associated with the human physiological state.

The Intelligence sub-module represents the control of robot actions during an HRC experiment. Programming the experiment is part of Intelligence, since it controls the speed, acceleration, and trajectory of the robot. The Intelligence module processes the data from the Awareness module to generate event markers, as well as robot actions that can be used as stimuli to elicit a human response. In addition to Awareness, it also receives input from the Compliance module, which is a form of interpretation of human expectation. The Intelligence module interprets this human command/feedback into actionable robot commands. Using human physiological signals as feedback to the robot, or as a form of actionable control, will help achieve a complete human-in-the-loop closed-loop system. Here, the Compliance sub-module is responsible for inference from the physiological signals, or from any form of commands from the human, that can be used to modify the robot behavior. Achieving a higher level of Compliance for the robot and managing human expectations by interpreting the human physiological state can thus be a gateway to a more interactive human-robot collaboration.

Awareness, Intelligence, and Compliance are the main parts of the framework [9]; however, to integrate these three modules, a communication layer for data transformation and synchronization is required. This is critical, as many sensor devices and other systems do not share the same frequency and timing clock. The communication layer is responsible for transferring data in real time and also for synchronizing the data from different sources, such as physiological signal collection devices, cameras, the representation of the human-robot state in the digital twin, and robot state information.
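Because each device samples on its own clock and rate, the communication layer must bring all streams onto a common timeline before they can be compared. A minimal sketch of this alignment using linear interpolation (reasonable for slowly varying signals such as GSR); the function and variable names are illustrative, not from the authors' implementation:

```python
def resample_to(timestamps, values, target_times):
    """Linearly interpolate (timestamps, values) onto target_times.

    timestamps must be sorted ascending; target_times outside the
    recorded range are clamped to the first/last sample.
    """
    out = []
    for t in target_times:
        if t <= timestamps[0]:
            out.append(values[0])
        elif t >= timestamps[-1]:
            out.append(values[-1])
        else:
            # find the pair of recorded samples that brackets t
            i = 1
            while timestamps[i] < t:
                i += 1
            t0, t1 = timestamps[i - 1], timestamps[i]
            v0, v1 = values[i - 1], values[i]
            out.append(v0 + (v1 - v0) * (t - t0) / (t1 - t0))
    return out

# Example: a 4 Hz GSR stream aligned onto a 2 Hz reference timeline
gsr_t = [0.0, 0.25, 0.5, 0.75, 1.0]
gsr_v = [1.0, 2.0, 3.0, 4.0, 5.0]
aligned = resample_to(gsr_t, gsr_v, [0.0, 0.5, 1.0])  # [1.0, 3.0, 5.0]
```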
Figure 1. An overview block diagram of the proposed framework for monitoring Human Physiological Response during a Human Robot Collaborative Task.
When designing an experiment involving human physiological signals, the following aspects are critical:
• Experiment design
• Event marker generation
• Synchronization
The importance of each is elaborated in the following sections.
1) Experiment design:
When designing an experiment, the experiment and its parameters need to be well defined. The task needs to be real, or as realistic as possible, so that the robot motion maintains its integrity as a stimulus to elicit the human physiological response. For example, an industrial task is a good option for the experiment, as it may improve the involvement of the subject sharing the human-robot collaboration workspace. In addition, the task needs to be simple and controlled to increase the repeatability of the human-robot interaction scenario; a complex task may result in more uncertainty.
2) Event Marker generation:
The generation of event markers is part of the experiment design. The important events need to be identified and generated by the experiment itself. Having markers during the experiment gives more insight into it, with events such as Experiment Start/End, Task Start/End, Robot Coming towards Human, etc. The event markers help to synchronize signals across different channels. For example, extracting the Galvanic Skin Response and Heart Rate signals between "Experiment Start" and "Experiment End" is trivial when the event markers are present in the signal recording. Thus, the markers can be used during post-processing for efficient data segmentation and epoching.
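Robot-side events such as these can be emitted automatically by watching for transitions in the robot state rather than by hand annotation. A minimal sketch of that idea, assuming the controller exposes its current state as a string; the class and names are illustrative, not the authors' implementation:

```python
class EventMarkerGenerator:
    """Emit (timestamp, label) markers on robot state transitions."""

    def __init__(self, clock):
        self.clock = clock        # callable returning the shared experiment time
        self.last_state = None
        self.markers = []

    def update(self, state):
        """Call once per control cycle with the current robot state."""
        if state != self.last_state:
            self.markers.append((self.clock(), f"Robot state change: {state}"))
            self.last_state = state

# Example with a fake clock so the output is deterministic
ticks = iter([0.0, 0.1, 0.2, 0.3])
gen = EventMarkerGenerator(clock=lambda: next(ticks))
for s in ["Normal", "Normal", "Reduced", "Stop"]:
    gen.update(s)
# gen.markers now holds one marker per transition (three in total),
# while the repeated "Normal" reading produced no duplicate marker.
```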
3) Synchronization:
Synchronization of the signals from different sensors is crucial for analyzing the human physiological response. All of the signals from the human and the robot need to be synchronized with the event markers, so a central synchronization system is necessary. In the proposed framework for the physiological computing system, Lab Streaming Layer (LSL) is used to interface the subsystems, integrating data from all of the different devices being used. Lab Streaming Layer is a system for the collection of time-series data over a local network with built-in time synchronization [12]. The LSL stream is nearly real-time and is commonly used in biological signal collection systems such as OpenBCI, Pupil Labs, etc. Therefore, the LSL layer is selected as the central core of the data acquisition system in the proposed framework. In the framework, each device has an application node that is responsible for acquiring the signal from the device in real time and pushing it to the LSL stream. A recorder node is responsible for recording all time-series data from the LSL stream into a local file for post-processing and analysis. Along with LSL, Robot Operating System (ROS) and ZeroMQ are used to monitor data in real time during the experiment [9].
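LSL's built-in synchronization works by periodically exchanging timing packets with each stream and trusting the exchanges with the lowest network delay, in the style of NTP. The core of that clock-offset correction can be sketched as follows (an illustrative simplification, not LSL's actual code):

```python
def estimate_offset(pings):
    """Estimate the remote-minus-local clock offset from ping measurements.

    Each ping is (t_sent, t_remote, t_received), all in seconds on the
    respective clocks. The ping with the smallest round-trip time is the
    most trustworthy, mirroring NTP-style synchronization.
    """
    best = min(pings, key=lambda p: p[2] - p[0])       # smallest RTT
    t_sent, t_remote, t_received = best
    # assume the remote timestamp was taken at the RTT midpoint
    return t_remote - (t_sent + t_received) / 2.0

def to_local_time(remote_stamp, offset):
    """Map a device timestamp into the host clock domain."""
    return remote_stamp - offset

# The remote clock runs 5 s ahead; the low-RTT ping recovers that offset,
# while the congested second ping is ignored.
pings = [(0.00, 5.01, 0.02),   # RTT 0.02 s  -> chosen
         (1.00, 6.30, 1.50)]   # RTT 0.50 s  -> ignored
offset = estimate_offset(pings)
local = to_local_time(10.0, offset)    # a remote stamp of 10.0 -> ~5.0 locally
```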
III. CASE STUDIES
A. Case Study I
The objective of the experiment is to monitor the effect of the robot's acceleration and trajectory on human physiological signals during a collaborative task. The experiment was performed using a UR5e (Universal Robots) six degree-of-freedom (DoF) robot arm, as shown in Figure 2. The UR5e is a common collaborative robot with a payload of 5 kg, suitable for manufacturing environments and laboratories. The experiment is a simplified version of an industrial task of loading inserts and unloading parts at a plastic injection molding plant. It represents a scenario in which the human-robot shared workspace is on a table and the human is stationary.

Figure 2. A picture of a subject preparing for the experiment, showing device placement.

The experiment consists of four sub-tasks, which are tabulated in Table I. In the experiment the maximum speed was set to 100 degrees/second, so that in case of a collision any injury or pain will be minimized. Since the maximum speed is fixed, the experiment is designed to vary acceleration and trajectory.

Table I
THE PARAMETERS OF EACH TASK

        Acceleration  Trajectory
Task 1  Normal        Fixed
Task 2  High          Fixed
Task 3  Normal        Random
Task 4  High          Random

Acceleration has two modes, normal and high (Table I). The trajectory also has two modes: fixed and random. A fixed trajectory has no waypoints between the pick and place waypoints, so the motion is fluent and predictable. A random trajectory passes through multiple waypoints randomly selected between the pick and place waypoints. Figure 3 shows an example of a trajectory in random mode that the robot may take between the Pick waypoint and the Place waypoint, or vice versa. The trajectory planner generates a trajectory from waypoints randomly selected on each plane.

Figure 3. The figure shows how the robot selects waypoints between the pick and place positions (side view). The solid line shows a possible random trajectory and the striped line shows the fixed trajectory for the robot.

The four types of tasks are performed by the subjects. Each task consists of two parts: loading inserts and unloading inserts. The subject is responsible for loading inserts on the plate shown in Figure 2 (top left). There are two possible actions the human can take during a task. The first is to load the plate and wait for the robot to unload all of the inserts from the plate before re-loading it. The second, which increases productivity, is to re-load inserts already taken by the robot while the robot is still unloading. The subject has the freedom to choose whichever action is comfortable.

The robot is responsible for unloading, i.e., picking all inserts from the plate and placing them into a container. To decide when to start unloading, the robot checks the master pin on the plate every five seconds. This is helpful in generating an event marker representing the start of the task. If the master pin has an insert, the plate is full and the robot starts the unloading process, picking each item in order and placing it into the container. If there is no insert on the master pin, the robot goes to its home position and waits for five seconds. The experiment setup and sensor placement can be seen in Figure 2.
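The random-trajectory mode described above can be sketched as drawing one randomly perturbed intermediate waypoint per plane between the pick and place positions. The planner below is a hypothetical illustration of that idea, not the authors' UR script; all names and the `spread` parameter are assumptions:

```python
import random

def random_trajectory(pick, place, n_planes=3, spread=0.15):
    """Return a waypoint list from pick to place with one randomly
    perturbed intermediate waypoint per plane crossed along the way.

    pick/place are (x, y, z) positions in meters; spread bounds the
    random deviation from the straight pick-to-place line.
    """
    waypoints = [pick]
    for k in range(1, n_planes + 1):
        frac = k / (n_planes + 1)               # position of the k-th plane
        base = [p + frac * (q - p) for p, q in zip(pick, place)]
        # deviate randomly in y and z; x stays on the plane
        base[1] += random.uniform(-spread, spread)
        base[2] += random.uniform(-spread, spread)
        waypoints.append(tuple(base))
    waypoints.append(place)
    return waypoints

random.seed(0)   # deterministic for the example
wps = random_trajectory((0.4, -0.2, 0.1), (0.4, 0.3, 0.1))
# wps holds 5 waypoints: pick, three randomized intermediates, place
```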
B. Case Study II
This experiment monitors human behavior under different safety algorithms during a human-robot collaborative task. The task is an implementation of a speed and separation monitoring setup in which a human and a UR10 robot perform two separate but related tasks while sharing a workspace [7]. Here, the human is not stationary and moves in the workspace, which requires wireless acquisition of the human physiological signals and a representation of the human-robot shared workspace.

The experiment setup is a generic robot pick-and-place task of placing 10 products in a box. The robot movement involves rotating the base joint between the pick and place positions on the tables. The human has an assembly task of threading a nut and a screw that are placed in the picking and placing area. After threading the bolts and screws, the human puts the finished part on a table outside the robot workspace. This human task was set up to control the human movement and the overlap of the human-robot workspace. For more information, our previous work [3] and [7] can be referred to. To avoid collision, safety algorithms are implemented to detect and anticipate the human motion, resulting in the robot stopping, reducing speed, or moving normally, i.e., at the maximum allowed speed for the task. The safety algorithms vary in parameters such as the critical human-robot separation distance and which sensors are used to calculate that distance. This results in different robot motion behavior.

The objective of this experiment is to monitor the human physiological response and also to observe the overall task productivity during this shared-workspace task.
The sensors used to monitor the human during the experiment are shown in Figure 4. A motion capture system is used to monitor human motion, a camera is used to record the experiment, the human gaze is tracked using Pupil Labs, and human physiological responses such as pupil dilation, PPG, GSR, EEG, and ECG are recorded.

A system diagram showing the data collection and monitoring is shown in Figure 5. The experiment setup is represented as a digital twin in order to represent the human and robot states during the experiment. This helps in generating human-robot interaction state data such as the human-robot separation distance (minimum distance), human head orientation, human pose and velocity, and action representation. This data is monitored and collected along with the human physiological responses. It is used to represent a combined human-robot state of the 'physiological computing' system and to analyze the stimuli and their effect on human behavior during the experiment. The event markers used for Case Studies I and II, the physiological signals that can be used, and the communication and synchronization of the data are discussed in the following section.

Figure 4. A motion capture system is used to monitor human motion, a camera is used to record the experiment, the human gaze is tracked using Pupil Labs, and human physiological responses such as pupil dilation, PPG, GSR, EEG, and ECG are recorded.
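The minimum human-robot separation distance produced by the digital twin can be computed as the smallest pairwise distance between tracked human keypoints and points sampled along the robot body. A simplified point-to-point sketch of that calculation (the actual system in [7] works with the full robot geometry; the example coordinates are hypothetical):

```python
import math

def min_separation(human_points, robot_points):
    """Smallest Euclidean distance between two 3-D point sets.

    human_points: tracked skeleton keypoints from motion capture.
    robot_points: points sampled along the robot links, obtained from
                  the digital twin's forward kinematics.
    """
    return min(math.dist(h, r)
               for h in human_points
               for r in robot_points)

human = [(1.0, 0.0, 1.2), (1.0, 0.3, 0.9)]    # e.g. head, hand
robot = [(0.0, 0.0, 0.8), (0.5, 0.3, 0.9)]    # e.g. base, tool flange
d = min_separation(human, robot)              # hand-to-flange pair: 0.5 m
```

This scalar can then be streamed alongside the physiological signals, e.g. to trigger the "Robot state change" events used by the safety algorithms.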
IV. DISCUSSION
A. Event Marker
The automatic generation of event markers during an HRC experiment is critical. The choice of event markers depends on the experiment setup and the objective of the experiment. The biggest advantage of automatically generating event markers is that the experiment can be performed uninterrupted. These event markers can be used to post-process and analyze the data effectively, as data segmentation and epoching of the collected signals become easier. The events that are automatically generated during the HRC task for Case Studies I and II are listed in Table II.
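With the markers recorded on the same clock as the signals, epoching reduces to slicing each stream between a pair of marker timestamps. A small sketch of that post-processing step (function and variable names are illustrative):

```python
def epoch(timestamps, samples, markers, start_label, end_label):
    """Return the samples recorded between two named event markers.

    markers is a list of (timestamp, label) pairs on the same clock
    as timestamps; the first matching start/end pair is used.
    """
    t_start = next(t for t, lbl in markers if lbl == start_label)
    t_end = next(t for t, lbl in markers if lbl == end_label)
    return [s for t, s in zip(timestamps, samples) if t_start <= t <= t_end]

# Extract the GSR samples recorded during the experiment proper
markers = [(1.0, "Experiment start"), (4.0, "Experiment end")]
ts = [0.5, 1.5, 2.5, 3.5, 4.5]
gsr = [0.1, 0.2, 0.3, 0.4, 0.5]
segment = epoch(ts, gsr, markers, "Experiment start", "Experiment end")
# segment == [0.2, 0.3, 0.4]
```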
B. Physiological Signals
In this section, we list some of the human physiological signals that have been used during human-robot experiments. The devices for collecting these signals have been successfully interfaced in the implemented prototype system of the proposed framework.
Figure 5. A system diagram representing the data collection and monitoring during the experiment as described in Case Study II.

Table II
THE EVENT MARKERS USED IN CASE STUDIES I & II

Case Study I:
Experiment start: Experiment started
Task [n] init: nth task initialized, but the subject has not completed loading yet
Task [n] start: nth task started; the robot is unloading all the parts
Task [n] end: nth task unloading is done
Robot approaching: Each time the robot comes toward the human, an event is generated
Pick up successful: Master pin is loaded
Pick up failed: Master pin is not loaded
Experiment end: Experiment is complete

Case Study II:
Experiment start: Experiment started
Robot state change: When the robot changes state between Normal, Reduced, and Stop
Robot is stopping: When the robot is coming to a complete stop
Robot is speeding up: When the robot is returning to normal speed
Robot is slowing down: When the robot is slowing down
Experiment end: Experiment is complete

• Electroencephalogram (EEG) is a method to record the brain's electrical activity via non-invasive electrodes placed on the human head. EEG has been used for error-related potentials, emotional valence scales, and evoked potentials. It has also been used to detect alpha activity, which indicates attentiveness, stress, and other emotional states. It can be argued that wearing an EEG cap while working is uncomfortable; however, it must be noted that in industry workers already wear helmets or hats. With the advent of advanced IoT systems, wireless communication and the small form factor of EEG equipment (e.g., g.tec, BioRadio, and OpenBCI) make it plausible to collect such data.
• Electrocardiogram (ECG) measures the heart's electrical activity. ECG can be used as a psychophysiological indicator of physical stress, mental stress, and fatigue. In an industrial setup, robot behavior can be adjusted based on the state of health of the operator. This can help in avoiding injuries that may result from work exhaustion [13].
• Electromyography (EMG) is a method to record the electrical activity generated by muscles. EMG has been used as a control input for basic robot interaction; a sense of control is very important for building human trust. Another example is using facial EMG to give information about sudden emotional changes or reactions, with the electrodes placed in safety glasses worn by the operator [8], [14], [15], [16].
• Galvanic Skin Response (GSR), also known as Skin Conductance (SC) or Electrodermal Activity (EDA), measures skin conductivity, which is triggered by the central nervous system. This signal has been used for emotion recognition, lie detection, and detecting physical and mental stress [8], [13], [17], [18], [19].
• Heart Rate (HR) and Heart Rate Variability (HRV) are signals that can be extracted from the ECG and also from the photoplethysmogram (PPG) signal. This information can indicate the state of the person, i.e., resting or active. HRV has been used as a psychophysiological indicator.
• Pupil Dilation is a measurement of the change in pupil diameter. Pupil dilation can be caused by ambient light changes in the environment as well as by emotional changes [20].
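As an example of turning raw heart data into such an indicator, a standard short-term HRV measure is the RMSSD over the inter-beat intervals extracted from PPG or ECG peaks. A minimal sketch (the interval values are hypothetical):

```python
import math

def rmssd(ibi_ms):
    """Root mean square of successive differences of inter-beat
    intervals (in milliseconds); a standard short-term HRV measure.
    Higher values generally correspond to a more relaxed, resting state.
    """
    diffs = [b - a for a, b in zip(ibi_ms, ibi_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

beats = [800, 810, 790, 805]    # hypothetical IBIs from detected PPG peaks
hrv = rmssd(beats)              # about 15.5 ms for this sequence
```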
C. Data Transfer and Signal Synchronization

The proposed framework in Figure 1 for monitoring human response during a Human-Robot Collaborative task uses the LSL layer as the core for transportation and synchronization. Using the LSL layer as the core brings many advantages. The first and most important is that it has built-in time synchronization; in addition, it allows the developer to use an external timer as well. The second most important feature is that the LSL layer is operating-system agnostic. This brings flexibility to the proposed framework, since some sensor manufacturers provide device drivers that support only certain operating systems.

Although the LSL layer has the ability to record signals from the stream as an XDF file, the proposed framework also uses rosbag as an alternative for recording. Rosbag is a popular tool in robotic applications for recording time-series data and replaying data from the collected bags. In addition, it has tools that help plot the streams from the bags. Hence it is selected for recording in parallel with the LSL layer.

Figure 5 shows the proposed framework. In the figure, each device has an application node that pushes data to the LSL layer. The LSL layer then delivers the data to two receivers, the LabRecorder and the LSL2Bag application, which are responsible for recording the data into a file.
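The parallel XDF and rosbag recording described above is a fan-out: every sample delivered by LSL is handed to each recorder independently, so the two files are written from the same synchronized stream. A stripped-down sketch of that pattern (the real system uses LabRecorder and the LSL2Bag node; the class here is illustrative):

```python
class StreamFanOut:
    """Deliver every incoming (timestamp, sample) to all registered sinks."""

    def __init__(self):
        self.sinks = []

    def add_sink(self, sink):
        self.sinks.append(sink)      # sink: callable(timestamp, sample)

    def push(self, timestamp, sample):
        for sink in self.sinks:
            sink(timestamp, sample)

# Two in-memory stand-ins for the XDF and rosbag recorders
xdf_file, bag_file = [], []
fan = StreamFanOut()
fan.add_sink(lambda t, s: xdf_file.append((t, s)))
fan.add_sink(lambda t, s: bag_file.append((t, s)))
fan.push(0.01, 0.42)                 # one GSR sample arrives from LSL
# both "recorders" now hold the same (timestamp, sample) pair
```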
V. CONCLUSION AND FUTURE WORK
In this research, a framework for monitoring and collecting the human physiological response during a human-robot collaborative task is presented and a prototype implementation is shown. The challenges of data communication, signal synchronization, and event markers are addressed and solutions are proposed. The implementation shows the synchronized and continuous collection of human-robot states and human physiological responses, and the system is expandable with additional sensors. Although the framework was designed for human-robot collaboration tasks, it is not limited to this setup; a similar approach can be taken for other 'physiological computing' systems.

Future research will focus on developing a complete user interface application for the 'physiological computing' system for processing the recorded signals, extracting information, and applying machine-learning algorithms to provide feedback to the robot. The final objective of this work is to generate a database that can be used to further the understanding of how human physiological responses can be inferred to result in adaptive robot motion behavior.
ACKNOWLEDGMENT
The authors would like to thank the Electrical Engineering Department at RIT. The authors are grateful to the staff of the Multi Agent Bio-Robotics Laboratory (MABL) and the CM Collaborative Robotics Research (CMCR) Lab for their valuable inputs.
REFERENCES
[1] S. Kumar and F. Sahin, "A framework for an adaptive human-robot collaboration approach through perception-based real-time adjustments of robot behavior in industry," in 2017 12th System of Systems Engineering Conference (SoSE), Oct. 2018, pp. 2850–2857.
[4] D. Kulic and E. Croft, "Anxiety detection during human-robot interaction," pp. 389–394, 2005.
[5] D. Kulić and E. Croft, "Affective state estimation for human-robot interaction," IEEE Transactions on Robotics, vol. 23, no. 5, pp. 991–1000, 2007. [Online]. Available: http://ieeexplore.ieee.org/document/4339537/
[6] L. Tiberio, A. Cesta, and M. Belardinelli, "Psychophysiological Methods to Evaluate User's Response in Human Robot Interaction: A Review and Feasibility Study," Robotics.
Robotica, vol. 25, no. 1, pp. 13–27, 2007.
[9] C. Savur, S. Kumar, S. Arora, T. Hazbar, and F. Sahin, "HRC-SoS: Human robot collaboration experimentation platform as system of systems," May 2019, pp. 206–211.
[10] T. Cichon and J. Rossmann, "Simulation-based user interfaces for digital twins: Pre-, in-, or post-operational analysis and exploration of virtual testbeds," pp. 365–372, 2017.
[11] M. Safeea and P. Neto, "Minimum distance calculation using laser scanner and IMUs for safe human-robot interaction," Robotics and Computer-Integrated Manufacturing, vol. 58, pp. 33–42, Aug. 2019.
[12] SCCN, "Lab Streaming Layer (LSL)," 2018.
[13] M. Ali, F. Al Machot, A. H. Mosa, M. Jdeed, E. Al Machot, and K. Kyamakya, "A globally generalized emotion recognition system involving different physiological signals," Sensors (Switzerland), vol. 18, no. 6, pp. 1–20, 2018.
[14] K. Gouizi, F. Bereksi Reguig, and C. Maaoui, "Emotion recognition from physiological signals," Journal of Medical Engineering and Technology, vol. 35, no. 6-7, pp. 300–307, 2011.
[15] C. Savur and F. Sahin, "American Sign Language Recognition system by using surface EMG signal," pp. 2872–2877, 2017.
[16] R. Chalapathy and S. Chawla, "Deep Learning for Anomaly Detection: A Survey," pp. 1–50, 2019. [Online]. Available: http://arxiv.org/abs/1901.03407
[17] S. Rohrmann, J. Hennig, and P. Netter, "Changing psychobiological stress reactions by manipulating cognitive processes," International Journal of Psychophysiology, vol. 33, no. 2, pp. 149–161, 1999.
[18] K. H. Kim, S. W. Bang, and S. R. Kim, "Emotion recognition system using short-term monitoring of physiological signals," Medical and Biological Engineering and Computing, vol. 42, no. 3, pp. 419–427, 2004.
[19] M. van Dooren, J. J. J. de Vries, and J. H. Janssen, "Emotional sweating across the body: Comparing 16 different skin conductance measurement locations," Physiology and Behavior, vol. 106, no. 2, pp. 298–304, 2012. [Online]. Available: http://dx.doi.org/10.1016/j.physbeh.2012.01.020
[20] P. Bonifacci, L. Desideri, and C. Ottaviani, "Familiarity of faces: Sense or feeling? An exploratory investigation with eye movements and skin conductance,"