Matthieu Destephe
Waseda University
Publications
Featured research published by Matthieu Destephe.
International Conference of the IEEE Engineering in Medicine and Biology Society | 2013
Matthieu Destephe; Takayuki Maruyama; Massimiliano Zecca; Kenji Hashimoto; Atsuo Takanishi
Walking is one of the most common activities that we perform every day. Although the main goal of walking is to move from one place to another, walking can also convey emotional cues in social contexts. Those cues can be used to improve interactions or to reinforce any message we want to express. However, there are few studies on the effects of emotional intensity on walking. In this paper, we propose to assess the differences in the expression of emotion with respect to the expressed intensity (low, middle, high and exaggerated). We observed two professional actors performing emotive walking at different intensities and analyzed the recorded data. For each emotion, we identified characteristic features which can be used in the future to model gait patterns and to recognize emotions from gait parameters. Additionally, we found characteristics which can be used to create new emotion expressions for our biped robot KOBIAN, improving human-robot interaction.
Frontiers in Psychology | 2015
Matthieu Destephe; Martim Brandao; Tatsuhiro Kishi; Massimiliano Zecca; Kenji Hashimoto; Atsuo Takanishi
The Uncanny valley hypothesis, which states that almost-human characteristics in a robot or a device can cause uneasiness in human observers, is an important research theme in the Human-Robot Interaction (HRI) field. Yet the phenomenon is still not well understood. Many studies have investigated the external design of humanoid robot faces and bodies, but only a few have focused on the influence of robot movements on our perception and feelings of the Uncanny valley. Moreover, no research has investigated the possible relation between our feeling of uneasiness and whether or not we would accept robots having a job in an office, a hospital or elsewhere. To better understand the Uncanny valley, we explore several factors which might influence our perception of robots, whether related to the subjects, such as culture or attitude toward robots, or related to the robot, such as the emotions and emotional intensity displayed in its motion. We asked 69 subjects (N = 69) to rate the motions of a humanoid robot (Perceived Humanity, Eeriness, and Attractiveness) and to state where they would rather see the robot performing a task. Our results suggest that, among the factors we chose to test, attitude toward robots is the main influence on the perception of the robot related to the Uncanny valley. Robot occupation acceptability was affected only by Attractiveness, mitigating any Uncanny valley effect. We discuss the implications of these findings for the Uncanny valley and the acceptability of a robotic worker in our society.
Human-Robot Interaction | 2013
Matthieu Destephe; Takayuki Maruyama; Massimiliano Zecca; Kenji Hashimoto; Atsuo Takanishi
Walking is one of the most common activities that we perform every day. While the main goal of walking is to go from point A to point B, walking can also convey emotional cues in social contexts. Those cues can be used to improve interactions or to reinforce any message we want to express. We observed a professional actress performing emotive walking and analyzed the recorded data. For each emotion, we found characteristic features which can be used to model gait patterns for humanoid robots. The findings were assessed by subjects who were asked to recognize the emotions displayed in the acts of walking.
20th CISM-IFToMM Symposium on Theory and Practice of Robots and Manipulators, ROMANSY 2014 | 2014
Scean Mitchell; Gabriele Trovato; Matthieu Destephe; Massimiliano Zecca; Kenji Hashimoto; Atsuo Takanishi
The design of what a robot could look like is a matter of growing importance. Variations in style, size, shape and colour open endless design possibilities. It is important to create a look that produces no uncanny valley effect on the human user and that is appropriate for potential service in different job areas in human society. In this paper we share the methods applied in the design of a new head for the humanoid robot KOBIAN-R. Our creation process is similar to a product design process, taking into account the psychology of shape, colour and functionality, among other factors. Following the creative process, we conducted surveys to assess our new design. Feedback from participants of diverse ages and cultural backgrounds provides valuable input for the future development of this robotic head.
Robotics and Biomimetics | 2014
Matthieu Destephe; Massimiliano Zecca; Kenji Hashimoto; Atsuo Takanishi
While robots are often used in autism therapy, the Uncanny valley effect has never been studied in subjects with Autistic Spectrum Disorder (ASD). Since persons with ASD have trouble understanding body language, they may react differently to the Uncanny valley. In this paper, we investigate possible differences in the Uncanny valley perception of an emotional humanoid robot between subjects with ASD and subjects without ASD. Thirty-four adult participants (N = 34; control: 19, ASD: 15; age: 28.5) were asked to watch videos of an emotional humanoid robot and rate its emotions and its gait (Perceived Humanness, Eeriness and Attractiveness). We found differences between the two groups in their perception of the robot's Perceived Humanness (p < .05). Also, while the ASD group performed as well as the control group in the emotion recognition task, we found that the ASD group is more sensitive to the Uncanny valley effect than the control group. Finally, we conclude on what our findings bring to the Human-Robot Interaction field.
Robot and Human Interactive Communication | 2013
Matthieu Destephe; Massimiliano Zecca; Kenji Hashimoto; Atsuo Takanishi
Understanding emotions in humans is very important in the Human-Robot Interaction field. The affective state of a person can be expressed in several ways, one of them being the way we walk: our gait can convey emotional cues in social contexts. Those cues can be used to improve personal interactions with our peers or to add meaning to any message we want to express. However, only a few studies in humanoid robotics have examined the effects of emotions on walking. In this paper, we assess, with a survey, emotional walking patterns created from motion capture data. Those patterns represent different emotions (sadness, happiness) at different intensities (middle, high and exaggerated). The emotional walking patterns achieved a high emotion recognition rate, and the subjects (N = 13) could recognize whole-body emotions without facial expression on our humanoid robot. Additionally, we found that at first people might perform poorly at recognizing emotions and their intensities but can improve, even without correction or feedback on their performance.
IEEE-RAS International Conference on Humanoid Robots | 2014
Takuya Otani; T. George; Kazuhito Uryu; Masaaki Yahara; A. Iizuka; Shinya Hamamoto; Shunsuke Miyamae; Kenji Hashimoto; Matthieu Destephe; Masanori Sakaguchi; Yasuo Kawakami; Hun-ok Lim; Atsuo Takanishi
In this paper, we describe the development of a leg with a rotational joint that mimics the elastic characteristics of the leg of a running human. The purpose of this development was to realize the dynamics of human running, the analysis of which has revealed that the motion of the leg can be modeled by a compression spring and that of the leg joint by a torsion spring. We therefore assumed that these elastic characteristics could be used to develop robots capable of human-like running, which requires higher output power than that of existing humanoid robots. Hence, we developed a model of a leg with a rotational joint and fabricated the leg by incorporating a mechanism comprising two leaf springs for adjusting the joint stiffness. By this means, we were able to achieve human-like joint stiffness, which could be adjusted by varying the effective length of one of the leaf springs. We evaluated the performance of the adjustable joint stiffness and were also able to achieve hopping by resonance of the rotational leg joint.
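The torsion-spring joint model described in this abstract can be illustrated with a short numerical sketch. All stiffness, inertia and length values below are illustrative assumptions for the sake of the example, not parameters from the paper; the 1/L³ relation is the standard stiffness scaling for a cantilevered leaf spring loaded at its tip.

```python
import math

# Sketch of the torsion-spring leg-joint model. All numeric values are
# illustrative assumptions, not values reported in the paper.

def joint_stiffness(k_leaf: float, effective_length: float, nominal_length: float) -> float:
    """Joint stiffness rises as the effective length of the adjustable leaf
    spring shrinks (cantilever tip stiffness scales as 1/L^3)."""
    return k_leaf * (nominal_length / effective_length) ** 3

def resonance_frequency(k_joint: float, inertia: float) -> float:
    """Natural frequency (Hz) of the rotational joint treated as a torsion
    spring acting on a rotational inertia: f = (1 / 2*pi) * sqrt(k / I)."""
    return math.sqrt(k_joint / inertia) / (2 * math.pi)

# Shortening the effective leaf-spring length stiffens the joint,
# which raises the resonance frequency exploited for hopping.
k = joint_stiffness(k_leaf=60.0, effective_length=0.10, nominal_length=0.15)  # N*m/rad
f = resonance_frequency(k_joint=k, inertia=0.9)  # leg inertia in kg*m^2
print(f"joint stiffness: {k:.1f} N*m/rad, hopping resonance: {f:.2f} Hz")
```

Driving the joint near this resonance frequency is what lets a spring-loaded leg hop efficiently, since the springs store and return energy each cycle.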
20th CISM-IFToMM Symposium on Theory and Practice of Robots and Manipulators, ROMANSY 2014 | 2014
Tatsuhiro Kishi; Hajime Futaki; Gabriele Trovato; Nobutsuna Endo; Matthieu Destephe; Sarah Cosentino; Kenji Hashimoto; Atsuo Takanishi
This paper describes the development of a robotic head with the ability to display marks commonly used in “manga” (Japanese comics). To communicate with humans, robots should have expressive facial abilities for indicating their inner state. Our previous research suggests that robots can express their emotions clearly if they perform facial expressions adapted to the cultural background of the communication partner. As a first step, we focus on making expressions for Japanese people. Manga marks are a unique and well-known way of expressing emotion in Japanese culture. In a previous preliminary experiment, we determined facial expressions for the robot KOBIAN-R with manga marks. Those expressions included four manga marks: “Cross popping veins” for “Anger”, “Tear mark” for “Sadness”, “Vertical lines” for “Fear” and “Wrinkle” for “Disgust”. A new head that expresses these marks was developed, implementing a flexible full-color LED matrix display and a mechanism for displaying black lines. Experimental evaluation shows that the new robotic head achieves over 90% average emotion recognition rates among 30 Japanese participants for each of the six emotions.
International Conference of the IEEE Engineering in Medicine and Biology Society | 2015
Guillermo Enriquez; Matthieu Destephe; Shuji Hashimoto; Atsuo Takanishi
New hardware opens possibilities for developing novel methods of monitoring human behavior. In this paper we present a low-cost system using two RGB-D cameras in a 3 m × 8 m space. Using our software, we can easily collect, combine, visualize, modify and analyze the data. To validate the system, we measured human behavior in a walking experiment (N = 11). The data obtained from the system were accurate and validated our approach for human interaction analysis.
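Combining observations from two cameras, as in this abstract, requires expressing each camera's measurements in a shared room frame. The sketch below does this for ground-plane positions with a rigid 2D transform and then averages the two estimates; the camera poses and coordinates are invented for the example, since the paper's actual calibration and software are not described in the abstract.

```python
import math

# Minimal sketch of fusing position estimates from two RGB-D cameras into a
# common room frame. Camera poses below are illustrative assumptions.

def to_room_frame(point, yaw_rad, offset):
    """Rotate a camera-local (x, z) ground-plane point by the camera's yaw,
    then translate by the camera's position in the room."""
    x, z = point
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    return (c * x - s * z + offset[0], s * x + c * z + offset[1])

def fuse(p_a, p_b):
    """Average the two cameras' estimates of the same person's position."""
    return ((p_a[0] + p_b[0]) / 2, (p_a[1] + p_b[1]) / 2)

# Camera A at the room origin; camera B at the far end, looking back (yaw = pi).
# Both observe the same person standing at roughly (1, 2) in the room frame.
obs_a = to_room_frame((1.0, 2.0), yaw_rad=0.0, offset=(0.0, 0.0))
obs_b = to_room_frame((-1.0, 6.0), yaw_rad=math.pi, offset=(0.0, 8.0))
print(fuse(obs_a, obs_b))
```

In a real system the yaw and offset of each camera would come from an extrinsic calibration step, and the fusion would typically weight each estimate by depth-dependent measurement noise rather than averaging equally.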
Intelligent Robots and Systems | 2014
Tatsuhiro Kishi; Hajime Futaki; Gabriele Trovato; Nobutsuna Endo; Matthieu Destephe; Sarah Cosentino; Kenji Hashimoto; Atsuo Takanishi
This paper describes the development of a robotic head with cartoon facial expression ability using comic marks. To communicate with humans, robots should have expressive facial abilities for indicating their inner state. Our previous research suggests that robots can express their emotions clearly if they perform facial expressions adapted to the cultural background of the communication partner. As a first step, we focus on making expressions for Japanese people. Comic marks are a unique and well-known way of expressing emotion in Japanese culture. First, we defined facial expressions by combining cartoon-like shapes of the facial parts that achieve high emotion recognition rates. Then we asked cartoonists to draw comic marks which they consider effective for emotion expression, and identified effective comic marks: “Cross popping veins” for “Anger”, “Tear mark” for “Sadness” and “Vertical lines” for “Fear”. Finally, we obtained model expressions with sufficiently high emotion recognition rates from the combination of the facial expressions and the comic marks. To achieve these expressions, we developed a flexible full-color LED matrix display module and a mechanism that pushes and pulls a sheet to express black lines. Experimental evaluation shows that the new robotic head achieves over 90% average emotion recognition rates for each of the six basic emotions. Results with non-Japanese subjects suggest that the impression of a robotic head's emotion expression changes depending on cultural background. These findings encourage us to pursue this concept of designing robots that display emotions adapted to the cultural background of the communication partner.