Publications

Featured research published by Jun Kurumisawa.


International Conference on Multimedia Computing and Systems | 1999

Artistic anatomy based, real-time reproduction of facial expressions in 3D face models

Jun Ohya; Kazuyuki Ebihara; Jun Kurumisawa

This paper proposes a new real-time method for realistically reproducing facial expressions in 3D face models, based on anatomy for artists. To reproduce facial expressions in a face model, the detected expressions need to be converted into data for deforming the face model. In the proposed method, an artist trained in anatomy for artists creates arbitrary facial expressions in the 3D face model by mixing reference expressions chosen by the artist, so that the synthesized expressions realistically represent the corresponding expressions displayed by real persons. The parameters obtained through these manual operations are used to construct the equations that convert the expression features obtained by the detection module into the displacement vectors of the vertices of the face model. During human communication through face models, these equations are used to reproduce the detected expressions in real time. The effectiveness and robustness of the proposed method were demonstrated by experimental results and demonstration systems.
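As a rough illustration of the conversion step described above, the sketch below fits a linear map from detected expression features to per-vertex displacement vectors, using the artist-made reference expressions as training pairs. The names, array shapes, and the least-squares formulation are assumptions for illustration only; the paper's actual equations are not reproduced here.

```python
# Minimal sketch (assumed formulation, not the paper's published code):
# fit a linear conversion from expression features to vertex
# displacements using artist-created reference expressions, then apply
# it per frame for real-time reproduction.
import numpy as np

def fit_conversion(F_ref, D_ref):
    """Least-squares fit of W such that D_ref ~ F_ref @ W.

    F_ref: (n_refs, n_features) expression features of the reference
           expressions chosen by the artist.
    D_ref: (n_refs, 3 * n_vertices) flattened displacement vectors the
           artist created for those reference expressions.
    """
    W, *_ = np.linalg.lstsq(F_ref, D_ref, rcond=None)
    return W

def reproduce_frame(features, W, n_vertices):
    """Convert one frame's detected features to per-vertex displacements."""
    return (features @ W).reshape(n_vertices, 3)
```

Once W is fitted offline from the artist's references, reproducing a frame is a single matrix multiply, which is what makes the per-frame step cheap enough for real-time use.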


International Conference on Human-Computer Interaction | 2015

Enhancing Abstract Imaginations of Viewers of Abstract Paintings by a Gaze Based Music Generation System

Tatsuya Ogusu; Jun Ohya; Jun Kurumisawa; Shunichi Yonemura

Abstract painters aim to evoke varied, abstract images in their viewers. However, viewers without sufficient knowledge of art cannot easily form such abstract images. The authors have proposed a music generation system that utilizes viewers' gazes, and this system can be expected to prompt viewers of abstract paintings to imagine the abstract images the painter intended to express. This paper explores, through subjective tests, whether the authors' music generation system can enhance the abstract imagination of people viewing abstract paintings. Experiments using 19 subjects and eight abstract paintings were conducted for two cases: the subjects viewed the abstract paintings without hearing any music, and while hearing the gaze-based music generated by the authors' system. The experimental results imply that hearing gaze-based music could enhance the viewers' abstract imagination.


International Conference on Human-Computer Interaction | 2014

Inspiring Viewers of Abstract Painting by a Gaze Based Music Generation

Tatsuya Ogusu; Jun Ohya; Jun Kurumisawa; Shunichi Yonemura

This paper explores how effectively the authors' gaze-based music generation system prompts the inspiration and imagination of viewers of abstract paintings. The system detects the viewer's gaze with gaze-detection equipment. At each position in the painting where the gaze stays, the color at that point is converted into a sound, so that as the gaze moves, music consisting of the resulting time series of sounds is generated. Experiments using six subjects and six abstract paintings were conducted for three cases: the subjects viewed the abstract paintings without hearing any music, while hearing pre-selected music, and while hearing the gaze-based music generated by the authors' system. The experimental results imply that hearing gaze-based music stimulated the viewers' inspiration and imagination best, hearing pre-selected music second best, and viewing without music least.
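As an illustration of the color-to-sound conversion described above, the sketch below samples the painting's color at each gaze fixation and maps it to a note. The hue-to-pitch and lightness-to-loudness rules are invented for this example; the paper's exact mapping is not specified here.

```python
# Illustrative sketch of a gaze-to-music pipeline (the mapping rules
# below are assumptions, not the authors' published conversion).
import colorsys

def color_to_note(rgb):
    """Map an (r, g, b) color with components in [0, 1] to (pitch, velocity)."""
    hue, lightness, _ = colorsys.rgb_to_hls(*rgb)
    pitch = 48 + int(hue * 24)            # hue spans two octaves above C3
    velocity = 40 + int(lightness * 80)   # brighter colors sound louder
    return pitch, velocity

def gaze_to_sequence(fixations, image):
    """Turn a time series of gaze fixations into a note sequence.

    fixations: iterable of (x, y) pixel positions where the gaze stayed.
    image:     2D indexable of (r, g, b) tuples, accessed as image[y][x].
    """
    return [color_to_note(image[y][x]) for x, y in fixations]
```

Because the sequence is driven by where the gaze dwells, two viewers of the same painting hear different music, which is the property the experiments above evaluate.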


Electronic Imaging | 2011

Study of recognizing human motion observed from an arbitrary viewpoint based on decomposition of a tensor containing multiple view motions

Takayuki Hori; Jun Ohya; Jun Kurumisawa

We propose a tensor-decomposition-based algorithm that recognizes an observed action performed by an unknown person from an unknown viewpoint, neither of which is included in the database. Our previous research aimed at motion recognition from a single viewpoint; in this paper, we extend the approach to human motion recognition from an arbitrary viewpoint. To achieve this, we build a tensor database, a multi-dimensional array whose dimensions correspond to human models, viewpoint angles, and action classes. The value of the tensor for a given combination of human silhouette model, viewpoint angle, and action class is the series of mesh feature vectors calculated for each frame of the sequence. To recognize a human motion, the actions of one of the persons in the tensor are replaced by the synthesized actions, and the core tensor of the replaced tensor is computed. This process is repeated for each combination of action, person, and viewpoint, and for each iteration the difference between the replaced and original core tensors is computed. The combination that gives the minimal difference is the action recognition result. The recognition results show the validity of the proposed method, which was experimentally compared with the nearest-neighbor rule; the method is very stable, recognizing each action with over 75% accuracy.
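A rough sketch of the replace-and-compare loop described above is given below, assuming a database tensor of shape (persons, viewpoints, actions, features) and a plain HOSVD core. The shapes, the HOSVD variant, and the Frobenius-norm comparison are illustrative assumptions, not the paper's exact implementation.

```python
# Illustrative sketch: recognize an observed motion by replacing each
# (person, viewpoint, action) slot in the database tensor and picking
# the replacement that perturbs the core tensor least.
import numpy as np

def mode_dot(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    Tm = np.moveaxis(T, mode, 0)
    out = np.tensordot(M, Tm, axes=([1], [0]))
    return np.moveaxis(out, 0, mode)

def hosvd_core(T):
    """Core tensor of T via higher-order SVD (one mode-n SVD per mode)."""
    G = T
    for mode in range(T.ndim):
        unfolded = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolded, full_matrices=False)
        G = mode_dot(G, U.T, mode)
    return G

def recognize(T, observed):
    """Return the (person, viewpoint, action) indices whose replacement
    changes the core tensor least; the action index is the result.

    T:        (persons, viewpoints, actions, features) database tensor.
    observed: (features,) mesh-feature vector of the unknown motion.
    """
    base = hosvd_core(T)
    best, best_diff = None, np.inf
    P, V, A, _ = T.shape
    for p in range(P):
        for v in range(V):
            for a in range(A):
                T2 = T.copy()
                T2[p, v, a] = observed
                diff = np.linalg.norm(hosvd_core(T2) - base)
                if diff < best_diff:
                    best, best_diff = (p, v, a), diff
    return best
```

The intuition is that the database tensor encodes how person, viewpoint, and action jointly shape the mesh features, so the replacement that is most consistent with that structure disturbs the core tensor least.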


IEEE MultiMedia | 1999

Virtual metamorphosis

Jun Ohya; Jun Kurumisawa; Ryohei Nakatsu; Kazuyuki Ebihara; Shoichiro Iwasawa; David Harwood; Thanarat Horprasert


Robot and Human Interactive Communication | 1996

Virtual Kabuki Theater: towards the realization of human metamorphosis systems

Jun Ohya; Kazuyuki Ebihara; Jun Kurumisawa; Ryohei Nakatsu


International Conference on Human-Computer Interaction | 2001

Augmented Reality Interface for Electronic Music Performance

Ivan Poupyrev; Rodney Berry; Mark Billinghurst; Hirokazu Kato; Keiko Nakao; Lewis Baldwin; Jun Kurumisawa


Archive | 2004

Sensory drawing apparatus

Shunsuke Yoshida; Jun Kurumisawa; Haruo Noma; Nobuji Tetsutani


ACM Multimedia | 2004

Sumi-nagashi: creation of new style media art with haptic digital colors

Shunsuke Yoshida; Jun Kurumisawa; Haruo Noma; Nobuji Tetsutani; Kenichi Hosaka


ICAT | 2003

Real-time Method for Animating Elastic Objects' Behaviors Including Collisions

Takafumi Watanabe; Jun Ohya; Jun Kurumisawa; Yukio Tokunaga

Collaboration


Dive into Jun Kurumisawa's collaborations.

Top Co-Authors

Yukio Tokunaga (Shibaura Institute of Technology)
Haruo Noma (Ritsumeikan University)
Shunichi Yonemura (Shibaura Institute of Technology)
Shunsuke Yoshida (George Washington University)
Ryohei Nakatsu (National University of Singapore)