Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Yutaka Ishii is active.

Publication


Featured research published by Yutaka Ishii.


International Journal of Human-Computer Interaction | 2004

Visualization of respiration in the embodied virtual communication system and its evaluation

Tomio Watanabe; Masamichi Ogikubo; Yutaka Ishii

A proposed embodied virtual communication system provides a virtual face-to-face communication environment in which two remote talkers can share embodied interaction by observing their interaction through two types of avatars. One is VirtualActor, a human avatar that represents the talker's communicative motion and respiratory motion. The other is VirtualWave, an abstract avatar that expresses human behavior and respiration through simplified wave motion. By using the system for the analysis by synthesis of embodied communication, the effectiveness of the visualization of respiration in VirtualActor and VirtualWave is demonstrated through the analysis of the entrainment of interaction and a sensory evaluation in remote communication.


international conference on human interface and management of information | 2011

A virtual audience system for enhancing embodied interaction based on conversational activity

Yoshihiro Sejima; Yutaka Ishii; Tomio Watanabe

In this paper, we propose a model for estimating conversational activity based on the analysis of enhanced embodied interaction, and develop a virtual audience system. The proposed model is applied to a speech-driven embodied entrainment wall picture, which is a part of the virtual audience system, for promoting enhanced embodied interaction. This system generates activated movements based on the estimated value of conversational activity in enhanced interaction and provides a communication environment wherein embodied interaction is promoted by the virtual audience. The effectiveness of the system was demonstrated by means of sensory evaluations and behavioral analysis of 20 pairs of subjects involved in avatar-mediated communication.


robot and human interactive communication | 2004

An embodied video communication system in which own VirtualActor is superimposed for virtual face-to-face scene

Yutaka Ishii; Tomio Watanabe

We have demonstrated the importance of sharing embodied interaction in communication by using an embodied virtual communication system in which talkers can share the same virtual space through VirtualActors, avatars that represent their interactive behavior. This paper proposes the concept of an embodied video communication system in which a talker's own VirtualActor is superimposed on the other talker's video image, and develops the system. Sensory evaluation and human interaction analysis demonstrate the effectiveness of the system in a communication experiment comparing it with a scene in which a reduced video image of the talker is superimposed on the other talker's video image. The system provides a new means of transmitting interaction awareness for human communication.


semantics and digital media technologies | 2009

Method for Identifying Task Hardships by Analyzing Operational Logs of Instruction Videos

Junzo Kamahara; Takashi Nagamatsu; Yuki Fukuhara; Yohei Kaieda; Yutaka Ishii

We propose a new identification method that aids in the development of multimedia content for task instruction. Our method can identify the difficult parts of a task in an instruction video by analyzing the operation logs of the multimedia player that a user employs to understand those parts. The experimental results show that we can identify the video segments that learners find difficult to learn from. The method can also identify hardships that the expert did not anticipate.


robot and human interactive communication | 2000

Evaluation of an embodied virtual communication system for human interaction analysis by synthesis

Yutaka Ishii; Tomio Watanabe

An embodied virtual face-to-face communication system with two types of avatars for human interaction analysis by synthesis is developed. One is a VirtualActor (VA), a human avatar that represents interactive behavior such as the motions of the head, arms, and body; the other is a VirtualWave (VW), an abstract avatar in which human behavior is simplified as wave motion to clarify the essential role of interaction. The system provides a networked virtual communication environment in which two remote talkers can share embodied interaction through their VAs or VWs, including themselves, in the same virtual space. The effectiveness of the system is demonstrated by the sensory evaluation and behavioral analysis of a communication experiment with 13 pairs of 26 talkers under various conditions, such as a time lag between motion and voice. The importance of mutual embodied sharing in communication is also clarified. The system is expected to form the foundation of information media and communication technologies, as well as a methodology for the analysis and understanding of various human interactions.


ieee/sice international symposium on system integration | 2014

Development of a nursing communication education support system using nurse-patient embodied avatars with a smile and eyeball movement model

Mayo Yamamoto; Noriko Takabayashi; Koki Ono; Tomio Watanabe; Yutaka Ishii

Facial expressions, such as a smile and gaze line, play an important role in nursing communication. Nursing students need to learn and understand how to communicate with patients through experience. We have developed a nursing communication education support system using embodied avatars; however, these avatars cannot exhibit facial expressions. In this study, an advanced nursing communication education support system using embodied avatars that have a smile and eyeball movement model is developed so as to improve the effectiveness of nursing communication education support. In addition, a communication experiment examines the effectiveness of the smile and eyeball movement model for nurse-patient avatars.


robot and human interactive communication | 2008

Evaluation of embodied avatar manipulation based on talker’s hand motion by using 3D trackball

Yutaka Ishii; Kouzi Osaki; Tomio Watanabe; Yoshihiro Ban

Remote talkers can communicate smoothly via their embodied avatars, which represent their interactive behaviors in the same virtual space. In order to enable virtual face-to-face communication, we have developed an embodied avatar-mediated communication system using a human avatar called "VirtualActor" within the same communication space. The effectiveness of the system has been confirmed by communication experiments. We have already proposed the concept of a virtual communication avatar system in which remote talkers under restricted conditions can operate their own avatars based on their hand motions, and have developed a prototype of this system using a glove sensor. In this study, communication systems using three pointing devices (a wireless mouse, a trackball, and a 3D mouse) are developed for primitive interaction using conscious hand motion input in a general PC environment. Instead of a simple trackball, a 3D trackball, which has one handle ball supporting yaw rotation as well as pitch and roll rotation, is developed in order to manipulate the avatar's head motion in a more intuitive manner. A communication experiment comparing the 3D trackball, wireless mouse, trackball, and 3D mouse is conducted with 15 pairs of talkers (30 in all); a sensory evaluation and an analysis of embodied avatar manipulation demonstrate the effectiveness of the proposed system.


international conference on human interface and management of information | 2013

Evaluation of superimposed self-character based on the detection of talkers' face angles in video communication

Yutaka Ishii; Tomio Watanabe

We build upon an embodied video chat system, called E-VChat, in which an avatar is superimposed on the other talker's video image to improve mutual interaction in remote communication. A previous version of this system used a headset-type motion capture device. In this paper, we propose an advanced E-VChat system that uses image processing to sense the talker's head motion without wearable sensors. Moreover, we confirm the effectiveness of the superimposed avatar for face-to-face communication in an experiment.


international universal communication symposium | 2010

Effects of delayed presentation of self-embodied avatar motion with network delay

Yutaka Ishii; Yoshihiro Sejima; Tomio Watanabe

A large network delay is likely to obstruct human interaction in telecommunication systems such as telephony or video conferencing systems. In spite of the extensive investigations that have been carried out on network delays of voice and image data, there have been few studies regarding support for embodied communication under conditions of network delay. To maintain smooth human interaction, it is important to understand the various ways in which delay manifests itself. We have already developed an embodied virtual communication system that uses an avatar called "VirtualActor," in which speakers who are remotely located from one another can share embodied interaction in the same virtual space. Responses to a questionnaire used in a communication experiment confirmed that a fixed 500-ms network delay has no effect on interactions via VirtualActors. In this paper, we propose a method of presenting a speaker's voice and an avatar's motion feedback in the case of a 1.5-s network delay using VirtualActors. We perform two communication experiments under different conditions of network delay. The aim of the first experiment is to examine the effect of a random time delay on the conversation. The second experiment is conducted under the conditions of a free-form conversation that takes place in 5 scenarios: 1 real-time scenario without a network delay and 4 scenarios with a network delay that involve a combination of a delay in the talker's voice and in his/her avatar's motion feedback. The subjects consisted of a total of 30 students who worked in 15 pairs and who were familiar with each other. A sensory evaluation shows the effects upon communication of delays in the avatar's motion feedback, from the viewpoint of supporting the interaction.


international conference on user modeling adaptation and personalization | 2010

Instructional video content employing user behavior analysis: time dependent annotation with levels of detail

Junzo Kamahara; Takashi Nagamatsu; Masashi Tada; Yohei Kaieda; Yutaka Ishii

We develop a multimedia instruction system for the inheritance of skills. This system identifies the difficult segments of a video by analyzing user behavior. Difficulties may be inferred from learners requiring more time to fully process a portion of the video; they may replay or pause the video during a segment, or play it at a slow speed. These difficult video segments are subsequently assumed to require the addition of expert instructor annotations in order to enable learning. We propose a time-dependent annotation mechanism employing a level-of-detail (LoD) approach. The annotation is superimposed upon the video based on the user's selected playback speed. The LoD, which reflects the difficulty of the training material, is used to decide whether to display the annotation to the user. We present the results of an experiment that describes the relationship between the difficulty of the material and the LoDs.

Collaboration


Dive into Yutaka Ishii's collaborations.

Top Co-Authors

Tomio Watanabe (Okayama Prefectural University)
Yoshihiro Sejima (Okayama Prefectural University)
Koki Ono (Okayama Prefectural University)
Mayo Yamamoto (Okayama Prefectural University)
Noriko Takabayashi (Okayama Prefectural University)
Hiraku Shikata (Okayama Prefectural University)
Kouzi Osaki (Okayama Prefectural University)