A Joint Motion Model for Human-Like Robot-Human Handover
Robin Rasch, Sven Wachsmuth and Matthias König

Abstract— In the future, robots will be present in everyday life. The development of these supporting robots is a challenge. A fundamental task for assistance robots is to pick up and hand over objects to humans. When interacting with users, soft factors such as predictability, safety and reliability become important factors for development. Previous works show that collaboration with robots is more acceptable when robots behave and move human-like. In this paper, we present a motion model based on the motion profiles of individual joints. These motion profiles are based on observations and measurements of joint movements in human-human handover. We implemented this joint motion model (JMM) on a humanoid and a non-humanoid industrial robot to show the movements to subjects. Particular attention was paid to the recognizability and human similarity of the movements. The results show that people are able to recognize human-like movements and perceive the movements of the JMM as more human-like compared to a traditional model. Furthermore, it turns out that the differences between a linear joint space trajectory and the JMM are more noticeable on an industrial robot than on a humanoid robot.
I. INTRODUCTION

Robots will be commonplace in the future and are designed to improve our quality of life at work and at home. Robots and humans will solve tasks together and interact with each other. These interactions can be handovers of an object between humans and robots. Handovers and other interactions can lead to direct physical contact between humans and robots. Therefore, it is important to take soft factors into account when developing robots that will operate in the personal space of humans. These soft factors are, for example, the traceability of the robot and the ease of use. Above all, the feeling of safety is important for interactions between humans and robots. For users, this feeling is restricted by the powerful movements of the robots and the associated potential danger. This requires that human safety, but also the feeling of it, must be taken into account in the development of robots. In order for an assistant robot to be able to give such a feeling and not cause any damage when interacting with people, it is necessary to create familiar situations for the user. This can be achieved by increasing the user's knowledge of the robot's movements. Two approaches can be used for this: training a model for each user, or imitating human movements and behaviors.

*This work is financially supported by the German Federal Ministry of Education and Research (BMBF, funding number: 03FH006PX5). Robin Rasch and Matthias König are with Faculty Campus Minden, Bielefeld University of Applied Sciences, 32427 Minden, Germany, [email protected]. Sven Wachsmuth is with the Central Lab Facilities, Cognitive Interaction Technology Excellence Cluster, Bielefeld University, Germany, [email protected].
As a starting point for our research, we assumed a basic scenario in which the robot fetches an object and delivers it to the user. Various research questions arise for this scenario: How does the robot know when to start the handover, so that the user is able to react? In which pose should the robot place the object? How can the robot react to the human's dynamic movements without colliding? When should the robot release the object so that it does not fall down and the user does not have to pull it? And how should the robot move so that the movement is safe and comfortable for the user? In short: how can a single task for an autonomous robot working in a person's personal space be developed that gives the user a feeling of safety and takes comfort and predictability into account?

One aspect to increase the user's feeling of safety during interaction with robots is to make the movements known and predictable for the user. This can be made possible by transferring the movements of human-human interactions to robot-human interactions. The more the movements of a robot resemble those of a human being, the higher the feeling of safety [1], [2]. To achieve more robust and safer human-robot interactions, it is a goal to expand and discuss the knowledge about how humans interact with each other. The movements during the transfer of objects have already been investigated [3], [4]. It turned out that people have different movement sequences for transferring objects depending on the situation and other parameters. Situations are determined by the combination of standing, sitting or moving interaction partners. One of the studies that dealt with this topic was Huber et al. [3]. A trajectory model for the wrist was developed from an experiment with two people sitting opposite each other. The model presented in this paper is based on the transfer between two standing interaction partners. The object is small and light and can be carried and transferred with one hand. This can be a small bottle or a pencil, for example. The study of our previous work [4] served as the basis for the joint motion model (JMM). The results of that study are a trajectory model for the wrist, but also a general movement pattern of the other joints in the arm. However, a trajectory model has several disadvantages for implementation on a robot. On the one hand, the trajectory is very dependent on the kinematics or inverse kinematics of the robot. On the other hand, the number of degrees of freedom can lead to joint configurations that look very inhuman, for example an overstretched elbow joint. For this reason, our JMM uses a model for each individual joint of the general movement sequence. An illustrative video can be found under the link https://youtu.be/oCd1sDV3PAs.

II. RELATED WORK

The question of how to make the handover of objects between robots and humans more human-like has already been investigated in various works. Our hypothesis that we could generate more human-like movements with a joint movement model was derived from these aspects: 1) the general approaches and processes of an object transfer, and 2) the movement models that exist for handover. The state of the art in these aspects is explained below.
A. Handover Process
Strabala et al. [5] provide an overview of the temporal sequence of a transfer. This sequence can be divided into three steps. After the initial transport or carry phase, the intention phase follows. In this phase, different signals are sent out by one of the interaction partners, which trigger the actual transfer phase. These signals can be facial expressions, gestures or verbal communication.

The transport phase can vary in length. If the object is further away or if it is in another room, a path planner is also required. This planner must be aware of the human interaction partner to maintain the feeling of safety during this phase. Different approaches show how the object can be transported in a safe and user-friendly way [6], [7]. The transport phase also affects the intention phase. An intention can be expressed, for example, when the handing-over person signals readiness by means of a gesture in which the person remains in a suitable position and extends the hands [8]. Gestures and facial expressions are also part of the next phase. Grigore et al. [9] built a behavioral model for robots that takes into account the orientation of the head and the gaze of the interaction partner during a handover. Some of the basic research projects [8], [10] on the transfer phase highlight a general process: 1) carrying, 2) coordinating and 3) object exchange. During the exchange phase, it is important for the robot to release the object at the moment of exchange. This moment is described by various factors. Some controllers focus on the facial expressions and gestures of the interaction partner [9], [11]; other controllers use an approach to determine a stable grip. For this purpose, different sensor systems and algorithms or learning methods are used to determine this grasp in different works [12], [13], [14].

In the coordination phase, various parameters of the handover are negotiated between the interaction partners, such as the handover position, based on various criteria such as field of view, safety and accessibility [15], [16]. Here the interaction is also determined by gestures and movements. Another factor of this interaction is the degree of synchronization. In contrast to the handshake [17], where synchronization determines the oscillating process, a handover requires an interpersonal synchrony [18]. Our approach deals with the carrying phase. In this phase, the object is moved to the transfer position. In contrast to the transport phase, this movement is short and is usually carried out with the arms only.
B. Motion Models
Human-human studies have shown that the object is not moved along a linear trajectory during the carrying phase. Initial parts of a movement indicate the intention to hand over [5]. The remaining movement sequence is used to position the object. The movement has been examined for different scenarios and implemented with different models and methods. One of the first studies was Shibata et al. [1], which showed that the trajectory of a handover between two seated persons is not linear but follows a general pattern.

An implemented planner is the trajectory planner from Huber et al. [3], which is based on the minimum-jerk approach and has been further developed into the decoupled minimum-jerk trajectory generator. The validation with a human-robot study shows the advantages of such a planner. Our previous work [4] was based on this planner. In contrast to Huber et al., the participants were not seated during the study, but stood opposite each other. The object was a light object that can be held in one hand. The results of this study were a trajectory model for the wrist and a general movement model consisting of movement primitives. Fig. 1 shows the motion primitives, which are executed in parallel or sequentially. The total movement consists of a flexion of the shoulder (Fig. 1a), which together with an adduction (Fig. 1b) leads to a partial circumduction. A flexion of the elbow (Fig. 1c) extends the arm towards the receiver. Finally, there are two variations referring to the rotation of the forearm, shown in Fig. 1d. In the first variant, the closed palm is turned downwards by a pronation of the forearm. In variant two, the palm of the hand is turned upwards by a supination. Further short movements, e.g. an abduction of the shoulder at the beginning of the movement, can also be noticed during a handover. Based on this movement model, our paper presents a detailed movement model for the individual joints.

Different techniques have been used to create a movement model. One variant is to imitate the movement of people. For this purpose, the movement is recorded by various sensors and represented by mathematical or logical models. These models are optimized on the basis of the data according to defined criteria. Common sensor approaches have been camera-based approaches or magnetic field-based tracking. Inertial measurement unit sensors were used in our previous study. The criteria according to which a model is optimized are versatile, e.g. minimum-variance [19] or minimum-jerk [20]. The trajectory model in [4] was based on a fifth-degree polynomial and was optimized for the minimal distance to the trajectories of human-human handover.
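For reference, the minimum-jerk profile underlying [3], [20] has a simple closed form; the following is the standard result from [20], with $x_0$, $x_f$ and $T$ as generic placeholders rather than notation from [3] or [4]. A coordinate moving from $x_0$ to $x_f$ within time $T$ follows

$$ x(t) = x_0 + (x_f - x_0)\,(10\tau^{3} - 15\tau^{4} + 6\tau^{5}), \qquad \tau = t/T, $$

which starts and ends with zero velocity and acceleration and thereby minimizes the integrated squared jerk.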
III. JOINT MOTION MODEL

We considered the task of handing over an object to a human with a robot. The robot should move as similarly as possible to humans in order to maintain the feeling of safety. We assumed that the object is light and small, so that it can be carried and held with one hand. In addition, the object was rigid and its physical properties were known. The robot already had the object in hand and came from the transport phase. In our scenario, we did not consider coordination issues such as handover poses, stability, or signaling problems.

Fig. 1: The motion primitives used during a transfer and their countermovements, executed by a Pepper robot: (a) shoulder flexion (red) and extension (blue); (b) shoulder abduction (red) and adduction (blue); (c) elbow extension (red) and flexion (blue); (d) forearm motions: supination (red) and pronation (blue). The combination of shoulder movements is also known as circumduction.

We were interested in making the movement as similar as possible to a human movement. In doing so, we did not focus on a human-like trajectory, as in previous works, but on a human-like movement pattern. The joint motion model is based on time-angle functions, whereby one function was determined for each primitive movement of the handover. To define and calculate the motion functions, the movements of people during handovers were analyzed from video data.

Furthermore, we were interested in the subjective perception of the robot's movements by humans. Can a person perceive changes in the movements? Does the subject notice human likeness? Do the statements change when differences are pointed out? To answer these questions, we implemented the motion model on two robots and interviewed test persons.
A. Motion Analysis
The video data were created on the basis of two experiments. The first dataset was created from the human-human study of our previous work [4]. Here, 150 handovers were carried out by 26 people. The subjects were aged between 16 and 49 years. The camera was positioned to the right behind the giving subject. In contrast to the previous study, a different video process was used to analyze the data. For this we used a technique that was already used by Sprute et al. [21] for the analysis of gestures. Our analysis process is based on OpenPose and Convolutional Pose Machines (CPM) [22] and is depicted in Fig. 2. First, our implementation for ROS synchronized the RGB and depth images. CPM then detected the keypoints on the RGB image and estimated the body joints, as shown in Fig. 3. The points were matched with the depth image to determine the 3D poses of the joints. The angles of the movement primitives were then calculated and recorded. Since OpenPose performs a single-frame computation, an additional tracking algorithm was implemented to track the correct person in the image.

Fig. 2: Analysis process for joint angles from RGB-D video data.

Since the RGB-D camera only had a frame rate of 30 frames per second, a second dataset was recorded. A simple RGB camera with a shooting rate of 100 FPS and a resolution of 1920x1080 pixels was used. Since it did not collect 3D data, the analysis process and scenario settings had to be adjusted. First, the angle of view was set perpendicular to the transfer. In this way, further information on the flexion/extension of the shoulder and elbow could be collected. Data on shoulder abduction or adduction could not be acquired from this perspective. The lack of depth information also eliminated the synchronization and matching steps of the process chain. The tracking step remained, as both test subjects were still in the picture. The advantage of this recording method was the higher resolution, both in terms of image and time. The second dataset was considerably smaller than the first one. It included 42 handovers of seven test persons aged between 25 and 33 years.

To determine a suitable movement model for the joints, several variable factors had to be factored out. The total time, and thus the speed of the movements, was standardized for the analysis. Since the persons determined the end positions themselves during the handover and human kinematics differ from person to person, the angles were considered both normalized and unnormalized. The data were also smoothed with a Savitzky-Golay filter to compensate for the discrepancies in the CPM estimates. The analysis of the data showed different motion profiles for the different joints.
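To illustrate the analysis steps described above, the following is a minimal sketch of the per-frame angle computation and Savitzky-Golay smoothing; the keypoint layout, window length and polynomial order are assumptions, not the values used in the study.

```python
import numpy as np
from scipy.signal import savgol_filter

def joint_angle(a, b, c):
    """Angle (degrees) at keypoint b, spanned by the segments b->a and b->c,
    e.g. shoulder-elbow-wrist for the elbow flexion angle."""
    u, v = a - b, c - b
    cos_angle = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

def elbow_profile(shoulders, elbows, wrists):
    """Per-frame elbow angles from 3D keypoints, smoothed to compensate
    for frame-to-frame discrepancies in the CPM estimates."""
    raw = np.array([joint_angle(s, e, w)
                    for s, e, w in zip(shoulders, elbows, wrists)])
    return savgol_filter(raw, window_length=11, polyorder=3)  # assumed settings
```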
Fig. 3: Sample image of the video analysis with the RGB camera. The left person hands over an object with his right hand. The positions of the joints and limbs have been detected and highlighted with OpenPose.

Since the different movements of the forearm were already examined in more detail in the previous work, the analysis of the elbow and shoulder is described in more detail in the following. First, the elbow flexion and extension is discussed.

The analysis showed two different variants of the motion sequence, which are illustrated and differentiated in Fig. 4. One variant, V1, showed a pronounced sub-motion, which was weak or non-existent in the other variant, V2. The basic procedure was the same for both variants. The angle was reduced by a flexion of the elbow. This raised the hand or end effector to a higher position. This meant that the shoulder joint did not cover the entire height range. In V1, shown in red in the illustrations, the flexion was more pronounced at the beginning. The giver drew the object closer to his body. To compensate for this movement, an additional extension of the elbow followed at the end. This caused the object to move in the direction of the receiver. This additional movement is clearly visible in Fig. 4a as the global minimum of the movement towards the middle of the execution. While V2 generally descended monotonically, V1 did not show this property. The evaluation also showed that the movements of V1 end with a higher, i.e. more stretched-out, arm. This becomes visible in the standardized view in Fig. 4b. Here it also becomes obvious that the movements of V1 are more scattered in their relative end positions. The absolute end positions showed similar behavior, as shown by the mean values in Fig. 4a. A further characteristic was that the flexion in V2 reached the larger minimum angle (avg. V1: 18°, V2: 22°), while in V1 the minimum angle was reached relatively earlier. The distribution between the two variants was approximately equal. This could not be determined exactly, since the extension occurs to varying degrees. As a result, the transition between the two variants could not be determined unambiguously.

In contrast to the movements of the elbow joint, the movements of the shoulder joints are very uniform, see Fig. 5. The flexion of the shoulder starts slowly before reaching the maximum speed. Towards the end, the movement decelerates slowly. The amplitude of the movement is between 45 and 55 degrees. The monotonous gradient shows that the movement consists only of a flexion and not an extension. This movement leads to the elevation of the object into the end position, whereby the height of the position also depends on the angle of the elbow.

Fig. 4: All recorded elbow angle movement profiles: (a) time-standardized only; (b) time- and amplitude-standardized. The two variants are marked by color (blue and red). In addition, the overall average (black) and the averages for both variants (bold) are shown.

The analysis of the adduction and abduction could only be carried out with the images from the depth camera. It turned out that the profile was similar to the flexion. The performed adduction guided the object in the horizontal direction, whereby the giver moved the object towards the centre of his own body and thus in the direction of the receiver. For both shoulder movements, the end positions depended on the end position of the handover.
Fig. 5: All recorded shoulder flexion profiles and their mean value.

B. Motion Model
To transfer the results of the analysis into a model for robots, our approach determines a motion function for each primitive movement. The characteristic of each function is based on the analysis of the data. For this purpose, standard functions were assumed and their coefficients for the individual primitive motions were calculated by approximating the mean values of the analysis. By normalizing the data in terms of amplitude and time, the motion functions can be used for different speeds as well as different start and end poses. But since the models also depend on the kinematics of the robot, additional constants were added. To adapt the models to the data as accurately as possible, a standard mathematical function was selected for each motion primitive which matches the characteristics of the analysis data.

Due to the gooseneck (S-curve) shape of the profiles, a sigmoid function (1) was chosen as the movement function of the shoulder. Compared to a polynomial function, it has the advantage of converging towards the boundary values at the edges.

$f(x) = \dfrac{a}{b + e^{-c x}}$   (1)

The coefficients a = 0.000905, b = 0.0008908 and c = 12.87 were determined by fitting the standardized data against time using the Levenberg-Marquardt algorithm. A sum of squared errors of SSE = 0. and a coefficient of determination of R² = 0. were achieved. With the additional parameters and robot constraints, the movement model for the joint angle $J_S$ results as:

$J_S(t) = \dfrac{a \, (j_e - j_s) \, r_c}{b + e^{-c \, t / t_e}} + j_s$ ,   (2)

where $j_s$ denotes the start angle of the joint and $j_e$ the end angle of the joint. $r_c$ describes a damping factor, which is restricted by the robot kinematics and adjusted to it. The time $t$, or rather the speed, is expressed by the term $t / t_e$ and can be viewed relative to the end time $t_e$.

For the motion function of the elbow flexion, a seventh-degree polynomial function (3) was selected to cover the characteristics of both variants.

$f(x) = c_7 x^7 + c_6 x^6 + \dots + c_1 x + c_0$   (3)

Instead of determining the coefficients for the overall mean value, the coefficients $c_0, c_1, \dots, c_7$ were determined for the mean values of the two variants. To determine the mean values, the recorded data were divided into the two variants. The time position of the global minimum and the difference to the end of the movement were considered as criteria. Since a polynomial is used to map the analysis data, the resulting function is only valid within the limits [0, 1]. The coefficients for the variants are shown in Table I. Variant one achieves SSE = 0. and R² = 0.; variant two has SSE = 0. and R² = 0. with respect to the analysis data. Using the previous constants, parameters and limitations, this results in the function for the movement model of the elbow joint:

$J_E(t) = j_s + (j_s - j_e) \, r_c \left( c_7 \left(\frac{t}{t_e}\right)^7 + c_6 \left(\frac{t}{t_e}\right)^6 + \dots + c_0 \right)$ ,   (4)

TABLE I: Polynomial coefficients $c_7, \dots, c_1$ for both variants of elbow motions; $c_0 = 1.$ for both variants.

where $r_c$, $j_s$ and $j_e$ have the same meaning as in (2) but are not identical to it. For the further primitive movements, the same approaches were used to determine the motion functions. A sigmoid function was also used as the basis for the adduction, while a polynomial was used for the pronation and supination.
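To make the fitting procedure concrete, the following sketch fits the sigmoid (1) to standardized data with SciPy's Levenberg-Marquardt backend and evaluates the shoulder model (2); the sample data are synthetic stand-ins and the start values are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, a, b, c):
    # Eq. (1): f(x) = a / (b + exp(-c * x))
    return a / (b + np.exp(-c * x))

# Time- and amplitude-standardized mean profile (synthetic stand-in here).
t_norm = np.linspace(0.0, 1.0, 100)
angle_norm = sigmoid(t_norm, 0.000905, 0.0008908, 12.87)

# method="lm" selects the Levenberg-Marquardt algorithm used in the paper.
(a, b, c), _ = curve_fit(sigmoid, t_norm, angle_norm,
                         p0=(1e-3, 1e-3, 10.0), method="lm")

def J_S(t, t_e, j_s, j_e, r_c=1.0):
    """Shoulder joint angle at time t according to Eq. (2)."""
    return a * (j_e - j_s) * r_c / (b + np.exp(-c * t / t_e)) + j_s
```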
C. Implementation of Model

For the implementation of this model, it was necessary to control the joints as well as to map the robot joints to the primitive movements. This is easy for humanoid robots as long as there are enough degrees of freedom. In our example this was done with a Pepper robot. In this case, the movements could be assigned directly to the joints. Only the pronation/supination of the forearm was represented by a rotation of the wrist. The robot constants $r_c$ were configured by the start and end positions of the handover. Based on kinematic properties (ratio between lower arm and upper arm) or limitations (tablet computer in front of the chest), the constant was used as a stretching or damping factor. The different movements of the Pepper robot are shown in Fig. 6. The differences in movement are visible in the deflection of the elbow joint in the side view and the trajectory of the hand in the front view.

Fig. 6: Image series of the two motion models executed by Pepper from different views: (a), (c) linear joint space trajectory; (b), (d) joint motion model.

In our second application case, the motion pattern was implemented on an under-actuated robot manipulator, a Kuka Youbot. In this case, the mapping of the primitive movements to the joints was more complex, because the robot has only five degrees of freedom and the joints are not arranged in a human-like way. Since the first and last joints of the manipulator are twisting joints, they were mapped to the two twisting movement primitives. Joint 1 thus corresponded to the shoulder adduction and joint 5 to the pronation and supination of the forearm. The two flexions were represented by the rotating joints of the robot. Joint 3 represented the flexion of the shoulder and joint 4 the flexion and extension of the elbow. Fig. 7 shows the manipulator with both motion models. The configuration of the robot constant was carried out similarly to that of the humanoid robot. A sketch of this mapping is given below.
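The following sketch summarizes the primitive-to-joint mapping described above; the Pepper joint names follow the NAOqi naming convention and are an assumption about the concrete setup, while the Youbot joint indices are those given in the text.

```python
# Mapping of the four motion primitives to robot joints.
PEPPER_JOINTS = {
    "shoulder_flexion":   "RShoulderPitch",  # assumed NAOqi joint names
    "shoulder_adduction": "RShoulderRoll",
    "elbow_flexion":      "RElbowRoll",
    "forearm_rotation":   "RWristYaw",       # pronation/supination via wrist rotation
}

YOUBOT_JOINTS = {
    "shoulder_adduction": 1,  # first twisting joint
    "shoulder_flexion":   3,  # rotating joint
    "elbow_flexion":      4,  # rotating joint
    "forearm_rotation":   5,  # last twisting joint
}
```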
Fig. 7: Image series of the two motion models executed by the industrial manipulator: (a) linear joint space trajectory; (b) joint motion model. The red marking shows the additional extension of the elbow in the JMM, which is clearly recognizable.

IV. EXPERIMENTAL EVALUATION

Initially, the question was raised as to whether an uninvolved and uninformed person could perceive changes during the movement of a handover and whether he or she could recognize a similarity to humans in the movement. Consequently, we conducted an experimental study to answer these questions.
A. Experimental Setup and Procedure
Twenty-five people aged between 22 and 34 participated in the study. The model was implemented on two robots. The Pepper robot served as the humanoid variant. The Kuka Youbot manipulator was used as the non-humanoid variant. It was placed on a 70 cm high table to reach a practical overall height. The test person was placed 50 cm in front of the robot, but was allowed to move freely in order to change the perspective on the robot. The robots had a light object in the gripper before each sequence and were in a typical pose to start the transfer.

The participants were divided into two groups without their knowledge. The experimental group consisted of 15 persons, the control group of 10 persons. The assignment was random. In both groups, the experiment was divided into two phases.
a) Phase 1: The subjects were not informed about the subject and question of the study. They were instructed about the procedure of the experiment and told that the robot wanted to hand over an object. The robots then demonstrated their movements. The order of the robots and of the movement models was randomly selected per person to exclude temporal relations and preferences. A linear joint space trajectory model (LJST) was used as a counter model to our movement model [23]. For the elbow flexion of the JMM we decided on variant 1, with the additional extension, to make the difference between the movements more visible. In the experimental group, each movement model was executed twice in succession. Between the two models there was no change of robot. The control group differed in that only the LJST was shown, four times. After all robots and movements were demonstrated, the participants were asked to complete a questionnaire. After completing the questionnaire, the second phase followed.
b) Phase 2: The subjects were instructed more precisely for this phase. Every subject was told the same thing, translated from German: "Note the differences in the movements of the arm and joints, for example the shoulder or the elbow. See what looks more human-like to you."
The exact differences between the movements were not explained to them. Nor were any statements made about the human likeness of the models. In this phase, the movements of both robots were demonstrated again. The experimental group saw the two different movement models, while the control group evaluated the same movement twice for each robot. The same questionnaire as in phase 1 was answered again, with an additional question about safety.

In this questionnaire, the participants were asked to answer various aspects of the research question subjectively. The following questions have been translated from German:

1) Difference: Have you noticed any difference in the movements of the humanoid/industrial robot?
2) Human likeness: Which movement of the humanoid/industrial robot was most similar to humans?
3) Detail: How did you identify the likeness to humans?
4) Robot comparison: Which robot moved more like humans?
5) Safety: How safe did you feel about movements 1-4?

The subjects quantified the differences and the safety using a Likert scale. For the human-likeness questions, one of the robots could be selected by the subjects. The detail question could be answered with free text. Finally, data on age, height and gender were collected.

The separation into control and experimental groups was undertaken to rule out the possibility that the explanation between phase 1 and phase 2 might have too much influence on the subjects, and that they would merely follow the instructor's explanations.
B. Results and Discussion
The results for the single phases and the different robots and movement models are shown in Fig. 8. In phase one, almost all participants in the experimental group found no (66.6%) or only minor (13.3%) differences in the movements of the humanoid robot. In contrast to the humanoid robot with an average value of 1.6, the test persons recognized a difference in the industrial manipulator with an average value of 4.3. This shows that a difference in the motion model is not perceived by an uninvolved person on a humanoid robot. This could presumably be attributed to the small differences in movement. The difference became clearer with the industrial robot, as the direct movement of the joints made a big difference to the motion model due to the kinematics. 73% of subjects did not make any statement about the human-likeness of the movement of the humanoid robot, as they did not perceive any difference between the movements. Of the remaining subjects, 75% felt the JMM to be more human-like than the LJST (25%). This was different with the industrial robot. There, all test persons gave an answer: 73.3% felt the JMM to be more human-like and 26.6% the LJST. The control group showed the expected results in comparison. 90% of subjects saw no to little differences in the humanoid robot and 100% in the industrial robot. Only one person saw differences and felt the LJST of the industrial robot and our model on the humanoid robot to be human-like. The final question for phase one, which robot moves more humanoidly, was answered by 96% of all subjects with the humanoid robot and 4% with the industrial robot.

Fig. 8: The development of the perceived differences between the first and second phase for the humanoid robot (HR) and the industrial manipulator (IR).

After the second phase, the test persons answered the questions again. The development of the perceived differences in the experimental group was decisive for our first hypothesis. After the explanation, 13.3% of the subjects noticed differences, 46.6% several differences and 33.3% strong differences in the movement of the humanoid robot. For the industrial robot, 40% noticed several differences and 53.3% strong differences in the movements.

To statistically verify the differences in perception between the two phases, a paired-samples t-test was conducted for each robot over phase 1 and phase 2. For the humanoid robot there was a significant difference between the scores for phase 1 (M = 1.6, SD = 0.) and phase 2 (M = 4., SD = 1.); t(14) = −., p = 0.. For the industrial robot there was no significant difference between the scores for phase 1 (M = 4.3, SD = 0.) and phase 2 (M = 4., SD = 1.); t(14) = 0., p = 1.. The results of the control group had to be compared in order to check whether the trend was based only on the explanation of the instructor or on the perception of the subjects. A comparison of arithmetic averages and standard deviations showed that the trend did not occur in the control group. With the humanoid robot, the evaluation dropped from 1.3 (SD: 0.48) to 1.1 (SD: 0.18), and with the industrial robot, the average value remained at 1.1 (SD: 0.18). It could be deduced from this that subjects were able to perceive differences in the movements of robots. If the differences were too small, they could only be noticed when the subjects took a closer look at the movement.
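The paired-samples t-tests reported here can be reproduced with SciPy; the rating vectors below are hypothetical placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical Likert ratings of the same 15 subjects in phase 1 and phase 2.
phase1 = np.array([1, 2, 1, 2, 1, 1, 2, 1, 1, 2, 1, 1, 2, 1, 1])
phase2 = np.array([4, 5, 3, 5, 4, 4, 5, 3, 4, 5, 4, 3, 5, 4, 4])

t_stat, p_value = stats.ttest_rel(phase1, phase2)
print(f"t({len(phase1) - 1}) = {t_stat:.2f}, p = {p_value:.4f}")
```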
After the differences in the movements were recognized, the subjects also made more statements about the human similarity of the movement models. After phase two, 93% of the subjects in the experimental group made a statement on the human resemblance of the humanoid robot and 100% on that of the industrial robot. For the humanoid robot, 86.6% of the subjects perceived the JMM as more human-like and 6.6% the LJST. The values for the industrial manipulator changed slightly compared to phase 1: 33.3% found the LJST and 66% the JMM to be more human-like. A binomial test indicated that the subjects perceived our proposed model as significantly more human-like when the robot type is not considered (p = 0., one-sided). It followed that our proposed model was perceived as more human-like, both for the industrial robot and for the humanoid robot. When asked why a movement was perceived as more human-like, seven subjects answered that the sequence was more similar to humans, five gave the elongation of the arm at the end of the movement as the reason, and four subjects found the reference model to be unnatural. Further answers related to speed and certain joint positions. In addition, the subjects stated that they had felt the movements of the humanoid robot to be more human-like (93.3%) than those of the industrial robot (6.6%).

A paired-samples t-test was conducted to compare the feeling of safety under the conditions of using the JMM and the LJST. The data of the experimental group were used. For both robots together there was a significant difference between the scores for the JMM (M = 4., SD = 0.) and the LJST (M = 3., SD = 0.); t(29) = 2., p = 0.. These results suggest that the JMM really does have an effect on the feeling of safety. Specifically, our results suggest that when a robot uses the JMM during handover, the safety feeling of the user increases. To distinguish whether this effect applies to both robot types, paired-samples t-tests were performed for each robot type. It was found that there was no significant difference in the feeling of safety between the two models JMM (M = 4., SD = 0.) and LJST (M = 4., SD = 0.) for the humanoid robot; t(14) = 0., p = 0.. In contrast, the t-test for the industrial robot showed a significant difference between the scores for the JMM (M = 3., SD = 1.) and the LJST (M = 3., SD = 0.); t(14) = 3., p = 0.. These results suggest that the increase in the feeling of safety is more pronounced for the industrial robot than for the humanoid robot. However, the non-significant increase can also be attributed to the high average value of both models for the humanoid robot. In order to test whether there was a different feeling of safety with the two robots independent of the movement model, another paired-samples t-test was carried out. The results for the humanoid robot (M = 4., SD = 0.) and the industrial robot (M = 3., SD = 1.) show a significant difference; t(29) = 4., p = 0..

V. CONCLUSION

The aim of this work was to determine a joint model for robots that makes a handover look more human-like. In addition, it was investigated whether humans perceive differences in robot movements. For the model, the movements of joints during human transfers were first observed and examined. Subsequently, the observations were approximated with characteristic functions using curve fitting. These functions were used to transfer the movements to the robots. Finally, a study was carried out in which the movements were evaluated by test persons. The results of the study showed for humanoid robots that most users only noticed a difference between the movements when it was pointed out to them. If the difference is recognized, the movements of the JMM are perceived as more human-like.
The sequence and a partial gesture, which is similar to a prompting gesture, were mentioned as characteristics of the human likeness. To validate the results, some of the participants served as a control group. An additional evaluation criterion was the test persons' sense of safety. The JMM was found to be significantly safer when the models were considered independently of the robot type. Considering the robot type, a significant difference was only found for the industrial robot, whereas no significant difference could be detected for the humanoid robot. In future work, we want to combine the joint model with a trajectory model in order to achieve exact position control. The resulting model has to be compared with existing human-like solutions to determine its performance. Possible subconscious signals of the subjects are also to be recorded in order to obtain results for the feeling of safety. Another limitation of this study was the positioning of the handover pose, which was static for all persons. A dynamic approach is planned to adapt the poses to people.

REFERENCES
[1] S. Shibata, K. Tanaka, and A. Shimizu, "Experimental analysis of handing over," in Proceedings of the IEEE International Workshop on Robot and Human Communication. Piscataway: IEEE, 1995, pp. 53–58.
[2] M. Huber, M. Rickert, A. Knoll, T. Brandt, and S. Glasauer, "Human-robot interaction in handing-over tasks," in Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication. IEEE, 2008, pp. 107–112.
[3] M. Huber, H. Radrich, C. Wendt, M. Rickert, A. Knoll, T. Brandt, and S. Glasauer, "Evaluation of a novel biologically inspired trajectory generator in human-robot interaction," in Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication. IEEE, 2009, pp. 639–644.
[4] R. Rasch, S. Wachsmuth, and M. König, "Understanding movements of hand-over between two persons to improve humanoid robot systems," in Proceedings of the IEEE-RAS International Conference on Humanoid Robotics (Humanoids). IEEE, 2017, pp. 856–861.
[5] K. W. Strabala, M. K. Lee, A. D. Dragan, J. L. Forlizzi, S. Srinivasa, M. Cakmak, and V. Micelli, "Towards seamless human-robot handovers," Journal of Human-Robot Interaction, pp. 112–132, 2013.
[6] E. A. Sisbot, R. Alami, T. Simeon, K. Dautenhahn, M. Walters, and S. Woods, "Navigation in the presence of humans," in Proceedings of the IEEE-RAS International Conference on Humanoid Robots. Piscataway, NJ: IEEE Operations Center, 2005, pp. 181–188.
[7] J. Mainprice, M. Gharbi, T. Simeon, and R. Alami, "Sharing effort in planning human-robot handover tasks," in Proceedings of the IEEE Symposium on Robot and Human Interactive Communication, I. Staff, Ed. IEEE, 2012, pp. 764–770.
[8] M. K. Lee, J. Forlizzi, S. Kiesler, M. Cakmak, and S. Srinivasa, "Predictability or adaptivity?" in Proceedings of the International Conference on Human-Robot Interaction, A. Billard, Ed. ACM, 2011, p. 179.
[9] E. C. Grigore, K. Eder, A. G. Pipe, C. Melhuish, and U. Leonards, "Joint action understanding improves robot-to-human object handover," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2013, pp. 4622–4629.
[10] P. Basili, M. Huber, T. Brandt, S. Hirche, and S. Glasauer, "Investigating human-human approach and hand-over," in Human Centered Robot Systems, ser. Cognitive Systems Monographs, vol. 6, M. Buss, H. Ritter, G. Sagerer, and R. Dillmann, Eds. Springer-Verlag, 2009, pp. 151–160.
[11] A. D. Dragan, K. C. Lee, and S. S. Srinivasa, "Legibility and predictability of robot motion," in Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI), H. Kuzuoka, Ed. IEEE, 2013, pp. 301–308.
[12] A. Kupcsik, D. Hsu, and W. S. Lee, "Learning dynamic robot-to-human object handover from human feedback," in Robotics Research, ser. Springer Proceedings in Advanced Robotics, vol. 2, A. Bicchi and W. Burgard, Eds. Springer International Publishing, 2018, pp. 161–176.
[13] J. R. Medina, F. Duvallet, M. Karnam, and A. Billard, "A human-inspired controller for fluid human-robot handovers," in Proceedings of the IEEE-RAS International Conference on Humanoid Robotics (Humanoids), T. Asfour and B. Aude, Eds. IEEE, 2016, pp. 324–331.
[14] A. G. Eguiluz, I. Rano, S. A. Coleman, and T. M. McGinnity, "Reliable object handover through tactile force sensing and effort control in the Shadow robot hand," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017, pp. 372–377.
[15] M. Cakmak, S. S. Srinivasa, M. K. Lee, J. Forlizzi, and S. Kiesler, "Human preferences for robot-human hand-over configurations," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, I. Staff, Ed. IEEE, 2011, pp. 1986–1993.
[16] E. A. Sisbot and R. Alami, "A human-aware manipulation planner," IEEE Transactions on Robotics, vol. 28, no. 5, pp. 1045–1057, 2012.
[17] G. Tagne, P. Henaff, and N. Gregori, "Measurement and analysis of physical parameters of the handshake between two persons according to simple social contexts," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2016, pp. 674–679.
[18] E. Delaherche, M. Chetouani, A. Mahdhaoui, C. Saint-Georges, S. Viaux, and D. Cohen, "Interpersonal synchrony: A survey of evaluation methods across disciplines," IEEE Transactions on Affective Computing, vol. 3, no. 3, pp. 349–365, 2012.
[19] C. M. Harris and D. M. Wolpert, "Signal-dependent noise determines motor planning," Nature, vol. 394, no. 6695, pp. 780–784, 1998.
[20] T. Flash and N. Hogan, "The coordination of arm movements: An experimentally confirmed mathematical model," The Journal of Neuroscience, vol. 5, no. 7, pp. 1688–1703, 1985.
[21] D. Sprute, R. Rasch, A. Pörtner, S. Battermann, and M. König, "Gesture-based object localization for robot applications in intelligent environments," in Proceedings of the International Conference on Intelligent Environments (IE), to be published.
[22] Z. Cao, T. Simon, S.-E. Wei, and Y. Sheikh, "Realtime multi-person 2D pose estimation using part affinity fields," in CVPR, 2017.
[23] P. Corke, Robotics, Vision and Control: Fundamental Algorithms in MATLAB. Springer, 2011.