Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Yoshihiro Sejima is active.

Publication


Featured research published by Yoshihiro Sejima.


international symposium on universal communication | 2008

Analysis by Synthesis of Embodied Communication via VirtualActor with a Nodding Response Model

Yoshihiro Sejima; Tomio Watanabe; Michiya Yamamoto

In this study, we develop an embodied virtual communication system with a speech-driven nodding response model for the analysis by synthesis of embodied communication. Using the proposed system in embodied virtual communication, we perform experiments and carry out sensory evaluation and voice-motion analysis to demonstrate the effects of nodding responses on a talker's avatar called VirtualActor. The results show that superimposed nodding responses in a virtual space promote communication.
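
The abstract does not give the response model's equations; as a rough illustration only, the sketch below assumes a nod is triggered when a moving-average estimate over the binary on/off pattern of the speaker's voice crosses a threshold. The window length, coefficients, and threshold are placeholders, not the paper's values.

# Hypothetical sketch of a speech-driven nodding response model.
# Assumption: nod likelihood is a moving-average (MA) filter over the
# binary on/off pattern of the speaker's voice, and a nod is triggered
# when the estimate crosses a threshold. All constants are placeholders.

def estimate_nod(speech_on_off, weights, threshold=0.5):
    """Return True if a nodding response should be generated now.

    speech_on_off: recent voice-activity frames, newest last (1 = voiced).
    weights:       MA coefficients, newest frame first.
    """
    window = speech_on_off[-len(weights):]
    # Weighted sum of recent voice activity approximates the nod likelihood.
    likelihood = sum(w * x for w, x in zip(weights, reversed(window)))
    return likelihood > threshold

# Example: more weight on slightly older frames, so a nod tends to follow
# a burst of speech into a pause.
frames = [0, 0, 1, 1, 1, 1, 0, 0]
weights = [0.1, 0.2, 0.3, 0.3, 0.1]
print(estimate_nod(frames, weights))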


international conference on human interface and management of information | 2011

A virtual audience system for enhancing embodied interaction based on conversational activity

Yoshihiro Sejima; Yutaka Ishii; Tomio Watanabe

In this paper, we propose a model for estimating conversational activity based on the analysis of enhanced embodied interaction, and develop a virtual audience system. The proposed model is applied to a speech-driven embodied entrainment wall picture, which is a part of the virtual audience system, for promoting enhanced embodied interaction. This system generates activated movements based on the estimated value of conversational activity in enhanced interaction and provides a communication environment wherein embodied interaction is promoted by the virtual audience. The effectiveness of the system was demonstrated by means of sensory evaluations and behavioral analysis of 20 pairs of subjects involved in avatar-mediated communication.
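
As a hedged illustration of how such an estimate could drive the wall picture, the sketch below assumes conversational activity is an exponentially smoothed measure of the two talkers' combined voice activity and that the picture's motion amplitude scales with it; the class name, weights, and smoothing constant are assumptions, not the paper's estimator.

# Illustrative sketch only: conversational activity is tracked as an
# exponentially smoothed measure of the two talkers' voice activity, and
# the wall picture's motion amplitude is scaled by that estimate.

class ConversationalActivityEstimator:
    def __init__(self, smoothing=0.05):
        self.smoothing = smoothing   # how quickly the estimate follows new frames
        self.activity = 0.0          # estimated conversational activity in [0, 1]

    def update(self, talker_a_voiced, talker_b_voiced):
        if talker_a_voiced and talker_b_voiced:
            frame_value = 1.0        # overlapping utterances: most "active"
        elif talker_a_voiced or talker_b_voiced:
            frame_value = 0.5        # one-sided speech
        else:
            frame_value = 0.0        # mutual silence
        self.activity += self.smoothing * (frame_value - self.activity)
        return self.activity

    def picture_motion_amplitude(self, max_amplitude=1.0):
        # The virtual audience (wall picture) moves more as activity rises.
        return max_amplitude * self.activity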


robot and human interactive communication | 2014

Development of an interaction-activated communication model based on a heat conduction equation in voice communication

Yoshihiro Sejima; Tomio Watanabe; Mitsuru Jindai

In a previous study, we developed an embodied virtual communication system for human interaction analysis by synthesis in avatar-mediated communication and confirmed the close relationship between speech overlap and the period for activating embodied interaction and communication through avatars. In this paper, we propose an interaction-activated communication model based on the heat conduction equation in heat-transfer engineering for enhancing empathy between a human and a robot during embodied interaction in avatar-mediated communication. Further, we perform an evaluation experiment to demonstrate the effectiveness of the proposed model in estimating the period of interaction-activated communication in avatar-mediated communication. Results suggest that the proposed model is effective in estimating interaction-activated communication.
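
The abstract names the heat conduction equation but not its discretization; the sketch below is one illustrative reading, in which speech overlap injects heat into a one-dimensional rod and the rod's mean temperature is read out as the interaction-activation estimate. The function name and all constants are placeholders, not the authors' published formulation.

# Hedged sketch: an explicit 1-D finite-difference heat conduction scheme
# in which speech overlap acts as a heat source and the mean temperature
# serves as the interaction-activation estimate.

import numpy as np

def activation_trace(overlap, nodes=20, alpha=0.1, dt=1.0, dx=1.0, gain=1.0):
    """Estimate an activation value per frame from a binary speech-overlap signal.

    overlap: iterable of 0/1 flags, one per frame (1 = both talkers speaking).
    """
    r = alpha * dt / dx**2            # must stay <= 0.5 for the explicit scheme
    assert r <= 0.5, "unstable time step"
    temp = np.zeros(nodes)
    trace = []
    for o in overlap:
        temp[nodes // 2] += gain * o  # speech overlap injects heat mid-rod
        # Explicit update of interior nodes: T_i += r*(T_{i-1} - 2*T_i + T_{i+1})
        temp[1:-1] += r * (temp[:-2] - 2 * temp[1:-1] + temp[2:])
        temp[0] = temp[-1] = 0.0      # both ends held cold, so activation decays
        trace.append(temp.mean())     # mean temperature = activation estimate
    return trace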


robot and human interactive communication | 2009

An embodied virtual communication system with a speech-driven embodied entrainment picture

Yoshihiro Sejima; Tomio Watanabe

We have already developed an embodied virtual communication system for human interaction analysis by synthesis. This system provides two remote talkers with a communication environment in which embodied interaction is shared by VirtualActors including the talkers themselves through a virtual face-to-face scene. We confirmed the importance of embodied sharing in embodied communication by using the analysis-by-synthesis system. We have also demonstrated the effects of nodding responses for embodied interaction and communication support. In this paper, we develop an embodied virtual communication system with a speech-driven embodied entrainment picture, “InterPicture,” for supporting virtual communication. The effects of the developed system are demonstrated by a sensory evaluation and speech-overlap analysis in a communication experiment with 20 pairs of talkers.


robot and human interactive communication | 2015

Development of an expressible pupil response interface using hemispherical displays

Yoshihiro Sejima; Yoichiro Sato; Tomio Watanabe

We have analyzed the entrainment between a speaker's speech and a listener's nodding in face-to-face communication, and developed iRT (InterRobot Technology) to generate a variety of communicative actions and movements, such as nodding and body movements, from a speech input based on the entrainment analysis. In this study, as basic research toward smooth communication during embodied interactions between humans and robots, we focus on the pupil response, which is related to human emotions during such interactions. We analyze the pupil response in human face-to-face communication by using an embodied communication system with a line-of-sight measurement device. On the basis of this analysis, we develop an expressible pupil response interface using hemispherical displays, in which the iRT is applied to enhance embodied interaction between humans and robots. This system enables expression of the pupil response by using only speech input. In addition, the effectiveness of the developed system is demonstrated experimentally.
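
The control law for the displayed pupil is not given in the abstract; the sketch below assumes simple first-order dynamics in which the rendered pupil dilates while speech is detected and relaxes back toward a resting diameter during silence. Function name, diameters, and rate are illustrative assumptions.

# Illustrative sketch, not the published control law: the displayed pupil
# dilates while speech is detected and relaxes toward a resting diameter
# during silence, with first-order dynamics.

def pupil_diameter(prev_diameter, speech_on, rest=4.0, dilated=7.0, rate=0.08):
    """Advance the displayed pupil diameter (mm) by one frame.

    speech_on: True while the speech input is voiced.
    rate:      fraction of the remaining gap closed each frame.
    """
    target = dilated if speech_on else rest
    return prev_diameter + rate * (target - prev_diameter)

# Usage: feed per-frame voice activity and draw a circle of the returned
# diameter at the centre of the hemispherical display each frame.
d = 4.0
for voiced in [0, 0, 1, 1, 1, 1, 0, 0]:
    d = pupil_diameter(d, bool(voiced))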


ieee/sice international symposium on system integration | 2016

A laughing-driven pupil response system for inducing empathy

Shoichi Egawa; Yoshihiro Sejima; Yoichiro Sato; Tomio Watanabe

Laughing responses play an important role in supporting human interaction and communication, and enhance empathy through shared laughter. Therefore, to develop communication systems that enhance empathy, it is desirable to design media representations that use the pupil response, which is related to affective responses such as pleasure and displeasure. In this paper, aiming to enhance empathy during human-robot interaction and communication, we develop a pupil response system that induces empathy through a laughing response, using a hemispherical display. In addition, we evaluate the pupil response with the laughing response by using the developed system. The results demonstrate that a dilated pupil response accompanying the laughing response is effective for enhancing empathy.


robot and human interactive communication | 2012

A speech-driven embodied group entrainment system with the model of lecturer's eyeball movement

Yoshihiro Sejima; Tomio Watanabe; Mitsuru Jindai; Atsushi Osa; Yukari Zushi

We have already developed a speech-driven embodied group entrained communication system called “SAKURA” for activating group interaction and communication. In this system, speech-driven computer graphics (CG) characters called InterActors, with functions of both speaker and listener, are entrained to one another as a teacher and students in a virtual classroom by generating communicative actions and movements. In this study, as basic research toward smooth communication during embodied interaction between humans and robots, we analyzed the eyeball movements of a lecturer communicating in a virtual group by using an embodied communication system with a line-of-sight measurement device. On the basis of the analysis results, we propose an eyeball movement model that consists of a saccade model and a model of a lecturer's gaze at an audience, called the “group gaze model.” Then, we developed an advanced communication system in which the proposed model was combined with SAKURA for enhancing group interaction and communication. This advanced system generates a lecturer's eyeball movements on the basis of the proposed model by using only speech input. We used sensory evaluation in the experiments to determine the effects of the proposed model. The results showed that the system with the proposed model is effective in group interaction and communication.
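
By way of illustration only, the sketch below combines a coarse group-gaze choice of which audience member to look at with small saccadic jumps around that target, driven by voice activity. The class name, switching probability, and saccade amplitude are placeholders, not the parameters derived from the authors' line-of-sight measurements.

# Hedged sketch of a speech-driven gaze controller: a "group gaze" choice of
# audience member plus small saccadic jitter around that target.

import random

class LecturerGaze:
    def __init__(self, audience_positions, switch_prob=0.02, saccade_deg=2.0):
        self.audience = audience_positions   # list of (x, y) gaze targets
        self.switch_prob = switch_prob       # chance per voiced frame to change target
        self.saccade_deg = saccade_deg       # amplitude of saccadic jitter (degrees)
        self.target = random.choice(self.audience)

    def step(self, speech_on):
        # Group gaze: while speaking, occasionally shift to another listener.
        if speech_on and random.random() < self.switch_prob:
            self.target = random.choice(self.audience)
        # Saccade: add a small random offset around the current target.
        dx = random.uniform(-self.saccade_deg, self.saccade_deg)
        dy = random.uniform(-self.saccade_deg, self.saccade_deg)
        return (self.target[0] + dx, self.target[1] + dy)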


international universal communication symposium | 2010

Effects of delayed presentation of self-embodied avatar motion with network delay

Yutaka Ishii; Yoshihiro Sejima; Tomio Watanabe

A large network delay is likely to obstruct human interaction in telecommunication systems such as telephony or video conferencing. Although network delays of voice and image data have been investigated extensively, there have been few studies on supporting embodied communication under network delay. To maintain smooth human interaction, it is important to understand the various ways in which delay manifests itself. We have already developed an embodied virtual communication system that uses an avatar called “VirtualActor,” in which remotely located speakers can share embodied interaction in the same virtual space. Responses to a questionnaire used in a communication experiment confirmed that a fixed 500-ms network delay has no effect on interactions via VirtualActors. In this paper, we propose a method of presenting a speaker's voice and an avatar's motion feedback in the case of a 1.5-s network delay using VirtualActors. We perform two communication experiments under different conditions of network delay. The first experiment examines the effect of a random time delay on the conversation. The second experiment is conducted as a free-form conversation in 5 scenarios: 1 real-time scenario without a network delay and 4 scenarios with a network delay that combine a delay in the talker's voice and in his/her avatar's motion feedback. The subjects were 30 students, working in 15 pairs, who were familiar with each other. A sensory evaluation shows the effects of delays in the avatar's motion feedback on communication, from the viewpoint of supporting the interaction.
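
One plausible way to realize the delayed-presentation conditions is a simple FIFO delay line per stream, sketched below. Only the 1.5-s delay figure comes from the abstract; the class name, frame rate, and interface are assumed for illustration.

# Sketch of a FIFO delay line that holds avatar-motion (or voice) frames
# for a configurable delay before release.

from collections import deque

class DelayLine:
    def __init__(self, delay_s=1.5, frame_rate=30):
        self.frames = deque()
        self.delay_frames = int(delay_s * frame_rate)

    def push(self, frame):
        """Queue the newest frame and return the one due now,
        or None while the line is still filling."""
        self.frames.append(frame)
        if len(self.frames) > self.delay_frames:
            return self.frames.popleft()
        return None

# Separate lines let the talker's voice and the avatar's motion feedback be
# delayed independently, matching the combinations tested in the experiment.
voice_line = DelayLine(delay_s=1.5)
motion_line = DelayLine(delay_s=0.0)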


international conference on human interface and management of information | 2018

A Video Communication System with a Virtual Pupil CG Superimposed on the Partner’s Pupil

Yoshihiro Sejima; Ryosuke Maeda; Daichi Hasegawa; Yoichiro Sato; Tomio Watanabe

Pupil response plays an important role in the expression of a talker's affect. Focusing on the pupil response in human voice communication, we analyzed the pupil response in embodied interaction and demonstrated that the speaker's pupil is clearly dilated during the burst-pause of utterance. In addition, using a developed system in which an interactive CG character generates a pupil response based on the burst-pause of utterance, we confirmed that the pupil response is effective for enhancing affective conveyance. In this study, we develop a video communication system with a virtual pupil CG superimposed on the partner's pupil for enhancing affective conveyance. This system generates a virtual pupil response in synchronization with the talker's utterance. The effectiveness of the system is demonstrated by means of sensory evaluations of 12 pairs of subjects in video communication.


society of instrument and control engineers of japan | 2017

Proposal of a pupil response model synchronized with burst-pause of utterance based on the heat conduction equation

Shoichi Egawa; Yoshihiro Sejima; Ryosuke Maeda; Yoichiro Sato; Tomio Watanabe

In our previous study, we analyzed the pupil response during a speaker's utterance by using a pupil measurement device and demonstrated that the speaker's pupil dilates in synchronization with the burst-pause of utterance. In addition, we developed a pupil response robot called “Pupiloid” that generates the pupil response with a mechanical structure, and demonstrated that the pupil response is effective for expressing affect. In this paper, in order to enhance affective conveyance in human-robot interaction, we propose a pupil response model synchronized with the burst-pause of utterance based on the heat conduction equation. This model estimates the degree of affective conveyance and generates the pupil response based on the estimated value.
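
The abstract gives no equations, so the sketch below reduces the idea to a lumped single-node heat balance: each burst-pause of utterance injects heat, the temperature decays over time, and the pupil diameter tracks it. The function name and all constants are placeholder assumptions, not the proposed model.

# Hedged illustration: a lumped (single-node) heat balance driven by the
# burst-pause of utterance, with the pupil diameter tracking the temperature.

def step_pupil_model(temperature, burst_pause,
                     inject=1.0, decay=0.05, rest_mm=4.0, scale_mm=2.5):
    """One frame of the pupil-response estimate.

    burst_pause: True on frames where a burst-pause of utterance is detected.
    Returns (new_temperature, pupil_diameter_mm).
    """
    temperature += inject if burst_pause else 0.0
    temperature -= decay * temperature           # heat dissipates between events
    diameter = rest_mm + scale_mm * min(temperature, 1.0)
    return temperature, diameter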

Collaboration


Yoshihiro Sejima's collaborations.

Top Co-Authors

Tomio Watanabe (Okayama Prefectural University)
Mitsuru Jindai (Okayama Prefectural University)
Yoichiro Sato (Okayama Prefectural University)
Shoichi Egawa (Okayama Prefectural University)
Ryosuke Maeda (Okayama Prefectural University)
Yutaka Ishii (Okayama Prefectural University)
Koki Ono (Okayama Prefectural University)