Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Junji Yamato is active.

Publication


Featured research published by Junji Yamato.


International Conference on Multimodal Interfaces | 2007

Automatic inference of cross-modal nonverbal interactions in multiparty conversations: "who responds to whom, when, and how?" from gaze, head gestures, and utterances

Kazuhiro Otsuka; Hiroshi Sawada; Junji Yamato

A novel probabilistic framework is proposed for analyzing cross-modal nonverbal interactions in multiparty face-to-face conversations. The goal is to determine who responds to whom, when, and how from multimodal cues including gaze, head gestures, and utterances. We formulate this problem as the probabilistic inference of the causal relationships among participants' behaviors involving head gestures and utterances. To solve this problem, this paper proposes a hierarchical probabilistic model in which the structures of interactions are probabilistically determined from high-level conversation regimes (such as monologue or dialogue) and gaze directions. Based on the model, the interaction structures, gaze directions, and conversation regimes are simultaneously inferred from observed head motion and utterances using a Markov chain Monte Carlo method. The head gestures, including nodding, shaking, and tilting, are recognized with a novel wavelet-based technique from magnetic sensor signals. The utterances are detected using data captured by lapel microphones. Experiments on four-person conversations confirm the effectiveness of the framework in discovering interactions such as question-and-answer exchanges and addressing behavior followed by back-channel responses.
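As a rough illustration of one ingredient of this pipeline, the sketch below flags nodding-like head motion by thresholding band-limited wavelet energy of a head-pitch signal. It is a minimal sketch, not the authors' method: the Morlet wavelet, the 0.5-4 Hz "nod band", the 30 Hz sampling rate, and the threshold are all assumptions chosen for illustration.

# Hedged sketch: wavelet-based nod detection in the spirit of the paper's
# wavelet-based head gesture recognition. All parameters are illustrative.
import numpy as np
import pywt

def detect_nods(pitch, fs=30.0, band=(0.5, 4.0), threshold=2.0):
    """Flag frames where band-limited wavelet energy of the head-pitch
    signal exceeds `threshold` times its median (a crude nod detector)."""
    # Continuous wavelet transform of the pitch angle over a range of scales.
    scales = np.arange(1, 64)
    coeffs, freqs = pywt.cwt(pitch, scales, "morl", sampling_period=1.0 / fs)
    # Keep only scales whose centre frequency falls in the assumed nod band.
    mask = (freqs >= band[0]) & (freqs <= band[1])
    energy = np.sum(np.abs(coeffs[mask]) ** 2, axis=0)
    return energy > threshold * np.median(energy)

# Usage: a synthetic 2 Hz nodding burst embedded in low-amplitude noise.
t = np.arange(0, 10, 1 / 30.0)
pitch = 0.1 * np.random.randn(t.size)
pitch[90:180] += np.sin(2 * np.pi * 2.0 * t[90:180])
print(detect_nods(pitch).sum(), "frames flagged as nodding")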


International Conference on Multimodal Interfaces | 2013

Predicting next speaker and timing from gaze transition patterns in multi-party meetings

Ryo Ishii; Kazuhiro Otsuka; Shiro Kumano; Masafumi Matsuda; Junji Yamato

In multi-party meetings, participants need to predict the end of the speaker's utterance and who will start speaking next, and to plan the timing of their own next utterance. Gaze behavior plays an important role in smooth turn-taking. This paper proposes a mathematical prediction model with three processing steps that predict (I) whether turn-taking or turn-keeping will occur, (II) who will be the next speaker in turn-taking, and (III) the timing of the start of the next speaker's utterance. As the feature quantity of the model, we focus on gaze transition patterns near the end of utterances. We collected corpus data of multi-party meetings and analyzed how the frequencies of appearance of gaze transition patterns differ across situations (I), (II), and (III). On the basis of this analysis, we construct a probabilistic mathematical model that uses the frequencies of appearance of all participants' gaze transition patterns. An evaluation shows that the proposed model predicts with high precision compared to models that do not take gaze transition patterns into account.
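To convey the flavor of step (I), here is a minimal sketch that scores gaze transition patterns against their empirical per-outcome frequencies in a naive-Bayes style. The pattern labels (e.g. "S->L" for speaker-to-listener gaze shifts), the add-one smoothing, and the toy corpus are invented for illustration and are not the paper's exact model.

# Hedged sketch: turn-taking vs. turn-keeping from gaze transition patterns.
from collections import Counter

def train(events):
    """events: list of (gaze_pattern, outcome) pairs, outcome in
    {"take", "keep"}. Returns per-outcome pattern frequency counts."""
    counts = {"take": Counter(), "keep": Counter()}
    for pattern, outcome in events:
        counts[outcome][pattern] += 1
    return counts

def predict(counts, patterns):
    """Score the observed patterns of all participants under each outcome
    and return the more likely one (add-one smoothing on the counts)."""
    def score(outcome):
        total = sum(counts[outcome].values()) + len(patterns)
        p = 1.0
        for pat in patterns:
            p *= (counts[outcome][pat] + 1) / total
        return p
    return max(("take", "keep"), key=score)

# Usage with an invented four-event corpus.
corpus = [("S->L", "take"), ("S->M", "take"), ("S->S", "keep"), ("L->S", "keep")]
model = train(corpus)
print(predict(model, ["S->L", "S->M"]))  # -> "take"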


Face and Gesture 2011 | 2011

Analyzing empathetic interactions based on the probabilistic modeling of the co-occurrence patterns of facial expressions in group meetings

Shiro Kumano; Kazuhiro Otsuka; Dan Mikami; Junji Yamato

This paper presents a novel research framework for estimating the emotional interactions produced between meeting participants. The types of emotional interaction targeted in this paper are empathy, antipathy, and unconcern. We define an emotional interaction as a brief contiguous event wherein a pair exchanges emotional messages via verbal and non-verbal behaviors. As the key behaviors, we focus on facial expression and gaze, because their combination realizes the rapid and directed transmission of a large number of emotional messages. We assume a strong link between an emotional interaction and the facial expressions that the participants display simultaneously during it. Based on this assumption, we build a probabilistic model that represents a hierarchical structure involving the emotional interactions, facial expressions, and other behaviors, including utterances and gaze directions. Using this model, the type of emotional interaction is estimated from interpersonal gaze directions, facial expressions, and utterances. Our estimation follows the Bayesian approach and uses the Markov chain Monte Carlo method to approximate the joint posterior probability distribution of the emotional interactions and model parameters given the observed data. An experiment on four-party conversations demonstrates the effectiveness of the proposed method.
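The sketch below illustrates the general idea of MCMC inference over a discrete interaction label, reduced to a toy Metropolis sampler for a single pair. The expression categories, likelihood tables, and uniform prior are invented for illustration and do not reflect the paper's actual hierarchical model.

# Hedged sketch: toy Metropolis sampling of one pair's interaction label.
import math
import random
from collections import Counter

LABELS = ["empathy", "antipathy", "unconcern"]

# P(expression pair | interaction label); numbers invented for illustration.
LIK = {
    "empathy":   {("smile", "smile"): 0.6, ("smile", "frown"): 0.1, ("neutral", "neutral"): 0.3},
    "antipathy": {("smile", "smile"): 0.1, ("smile", "frown"): 0.6, ("neutral", "neutral"): 0.3},
    "unconcern": {("smile", "smile"): 0.2, ("smile", "frown"): 0.2, ("neutral", "neutral"): 0.6},
}

def log_post(label, observations):
    # Uniform prior over labels, so the posterior is proportional to the likelihood.
    return sum(math.log(LIK[label].get(obs, 1e-6)) for obs in observations)

def metropolis(observations, steps=5000):
    state = random.choice(LABELS)
    counts = Counter()
    for _ in range(steps):
        proposal = random.choice(LABELS)  # symmetric proposal distribution
        if math.log(random.random()) < log_post(proposal, observations) - log_post(state, observations):
            state = proposal
        counts[state] += 1
    return {label: counts[label] / steps for label in LABELS}

# Usage: repeated mutual smiles should concentrate mass on "empathy".
obs = [("smile", "smile")] * 4 + [("neutral", "neutral")]
print(metropolis(obs))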


Human Factors in Computing Systems | 2005

Automatic video editing system using stereo-based head tracking for multiparty conversation

Yoshinao Takemae; Kazuhiro Otsuka; Junji Yamato

This paper presents an automatic video editing system based on head tracking for multiparty conversations. Systems that record meetings and those that support teleconferences are attracting considerable interest. Conventional systems use a fixed-viewpoint camera and simple camera selection based on participants' utterances. However, they fail to adequately convey to the viewer who is talking to whom. We focus on the participants' head orientations, since this information is useful in detecting the speaker and who the speaker is talking to. To automatically estimate each participant's head orientation, our system combines several modules for stereo-based head tracking. The system selects the shot of the participant that most participants are looking at, based on a majority decision. Experiments confirm the effectiveness of our system in several three-participant conversations. The results show that our system more successfully conveys who is talking to whom, a crucial piece of information that allows the viewer to better understand the conversation content.
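The majority-decision rule itself is simple; a minimal sketch follows, assuming the stereo head tracker has already been reduced to a per-frame looking_at mapping (a hypothetical input interface, not the system's actual one).

# Hedged sketch: pick the shot of the participant most others look at.
from collections import Counter

def select_shot(looking_at):
    """looking_at: dict mapping each participant to the participant their
    head is oriented toward (or None). Returns the majority gaze target;
    ties are broken arbitrarily by Counter ordering."""
    votes = Counter(target for target in looking_at.values() if target is not None)
    return votes.most_common(1)[0][0] if votes else None

# One video frame of a three-person conversation: A and C look at B.
print(select_shot({"A": "B", "B": "A", "C": "B"}))  # -> "B"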


International Conference on Multimedia and Expo | 2006

Poster Image Matching by Color Scheme and Layout Information

Cheng-Yao Chen; Takayuki Kurozumi; Junji Yamato

In this paper, we demonstrate a novel poster image matching system for wireless multimedia applications. We propose a method that incorporates both the color and layout information of the poster image to achieve robust matching performance. We apply both color compensation and background separation to effectively extract a poster from an image. Our experiments show that even under the effects of lighting, image rotation, scaling, and occlusion, our system maintains high recall and precision. We also show that our system can recognize the correct image from a database containing several poster images with similar features. Finally, the promising performance of poster image matching encourages us to further enrich information retrieval for wireless environments.
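A minimal sketch of combining color and layout appears below: split each image into a coarse grid, build a color histogram per cell, and score pairs by histogram intersection summed over cells. The 4x4 grid, 8-bin histograms, and intersection score are assumptions made for illustration; the paper's color compensation and background separation steps are omitted.

# Hedged sketch: grid-based color-scheme-and-layout image matching.
import numpy as np

def cell_histograms(img, grid=4, bins=8):
    """img: HxWx3 uint8 array. Returns (grid*grid, bins*3) normalized
    per-cell color histograms, preserving rough spatial layout."""
    h, w = img.shape[:2]
    feats = []
    for i in range(grid):
        for j in range(grid):
            cell = img[i * h // grid:(i + 1) * h // grid,
                       j * w // grid:(j + 1) * w // grid]
            hist = [np.histogram(cell[..., c], bins=bins, range=(0, 256))[0]
                    for c in range(3)]
            hist = np.concatenate(hist).astype(float)
            feats.append(hist / max(hist.sum(), 1))
    return np.array(feats)

def similarity(a, b):
    """Histogram intersection, averaged over grid cells; 1.0 = identical."""
    return float(np.minimum(a, b).sum(axis=1).mean())

# Usage: a random "poster" matched against an identical copy scores 1.0.
poster = np.random.randint(0, 256, (120, 80, 3), dtype=np.uint8)
query = poster.copy()
print(similarity(cell_histograms(poster), cell_histograms(query)))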


Active Media Technology | 2005

Development of automatic video editing system based on stereo-based head tracking for multiparty conversations

Yoshinao Takemae; Kazuhiro Otsuka; Junji Yamato

This paper presents an automatic video editing system based on head tracking for multiparty conversations. Archiving meetings is attracting considerable interest. Conventional systems use a fixed-viewpoint camera and simple camera selection based on participants' utterances. However, they fail to adequately convey to the viewer who is talking to whom. We focus on the participants' head orientations, since this information is useful in detecting the speaker and who the speaker is talking to. To automatically estimate each participant's head orientation, our system combines modules for stereo-based head tracking. The system selects the shot of the participant that most participants are looking at, based on a majority decision. Experiments confirm the effectiveness of our system in several three-participant conversations.


International Conference on Multimedia and Expo | 2005

Effects of Automatic Video Editing System Using Stereo-Based Head Tracking for Archiving Meetings

Yoshinao Takemae; Kazuhiro Otsuka; Junji Yamato

This paper presents an automatic video editing system based on head tracking for archiving meetings. Systems that archive meetings are attracting considerable interest. Conventional systems use a fixed-viewpoint camera and simple camera selection based on participants' utterances. However, they fail to adequately convey who is talking to whom and other nonverbal information about the participants. We focus on the participants' head orientations, since this information is useful in detecting the speaker and who the speaker is talking to. To automatically estimate each participant's head orientation, our system combines several modules to realize stereo-based head tracking. The system selects the shot of the participant that most participants are looking at, based on a majority decision. Experiments presenting the edited videos to viewers confirm the effectiveness of our system in several three-participant conversations.


Archive | 1997

Walking pattern processing method and system for embodying the same

Junji Yamato; Kyoko Sudo; Akira Tomono; Masanobu Arai


Archive | 2009

Prominent area image generating method, prominent area image generating device, program, and recording medium

Shogo Kimura; Junji Yamato


Archive | 1996

Walking pattern processing method and device therefor

Masanobu Arai; Kyoko Sudo; Akira Tomono; Junji Yamato

Collaboration


Dive into Junji Yamato's collaborations.

Top Co-Authors

Kazuhiro Otsuka, Nippon Telegraph and Telephone
Keiji Hirata, Future University Hakodate
Yoshinari Shirai, Nippon Telegraph and Telephone
Yoshinao Takemae, Nippon Telegraph and Telephone
Kenichiro Ishii, Nippon Telegraph and Telephone