Publications


Featured research published by Masaki Oshita.


Computer Graphics Forum | 2001

A Dynamic Motion Control Technique for Human-like Articulated Figures

Masaki Oshita; Akifumi Makinouchi

This paper presents a dynamic motion control technique for human-like articulated figures in a physically based character animation system. The method controls a figure such that it tracks input motion specified by a user. When environmental physical input such as an external force or a collision impulse is applied to the figure, the method generates dynamically changing motion in response. We have introduced comfort and balance control to compute the angular acceleration of the figure's joints. Our algorithm controls the individual parts of a human-like articulated figure separately, through the minimum number of degrees of freedom. Using this approach, our algorithm simulates realistic human motion at low computational cost. Unlike existing dynamic simulation systems, our method assumes that the input motion is already realistic, and aims to change it dynamically in real time only when unexpected physical input is applied to the figure. As such, our method works efficiently in the framework of current computer games.
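
As a concrete illustration of the tracking idea, here is a minimal Python sketch (not the authors' implementation) of per-joint tracking control: a PD servo drives a joint toward the reference pose from the input motion, and an external impulse perturbs the tracked trajectory. The gains, the impulse value, and the omission of the comfort/balance terms are illustrative assumptions.

```python
# Minimal sketch of per-joint tracking control (illustrative, not the
# paper's controller): a PD servo computes an angular acceleration that
# tracks the reference motion; an external impulse perturbs the result.

def tracking_acceleration(q, dq, q_ref, dq_ref, kp=400.0, kd=40.0):
    """Angular acceleration driving a joint toward the reference pose."""
    return kp * (q_ref - q) + kd * (dq_ref - dq)

dt = 1.0 / 60.0
q, dq = 0.0, 0.0             # current joint angle (rad) and velocity
q_ref, dq_ref = 0.3, 0.0     # pose and velocity from the input motion clip

ddq = tracking_acceleration(q, dq, q_ref, dq_ref)  # comfort/balance omitted
ddq += 5.0                   # assumed perturbation from an external impulse
dq += ddq * dt
q += dq * dt
print(f"joint angle after one step: {q:.4f} rad")
```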


Proceedings of Computer Animation 2001 (Fourteenth Conference on Computer Animation) | 2001

Real-time cloth simulation with sparse particles and curved faces

Masaki Oshita; Akifumi Makinouchi

In this paper, we present a novel technique for real-time cloth simulation. The method combines dynamic simulation and geometric techniques. Only a small number of particles (a few hundred at most) are controlled by dynamic simulation to reproduce global cloth behaviors such as waving and bending. The cloth surface is then smoothed based on the elastic forces applied to each particle and the distance between each pair of adjacent particles. With this geometric smoothing, local cloth behaviors such as twists and wrinkles are simulated efficiently. The proposed method is very simple, and is easy to implement and to integrate with existing particle-based systems. We also describe a particle-based simulation system for efficient simulation with sparse particles. The proposed method animated a skirt with rich detail in real time.
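
A rough Python sketch of the two-level structure may help: a coarse grid of particles is advanced with a mass-spring simulation, and rendering would then upsample this grid with smooth patches. The grid size, spring constant, and Verlet integrator are assumptions for illustration; the paper's geometric smoothing itself is only stubbed out here.

```python
# Sketch of sparse-particle cloth: simulate only a coarse grid with springs,
# then smooth/upsample the surface for display. Constants are illustrative.
import numpy as np

N = 8                                     # coarse grid of N*N particles
pos = np.zeros((N, N, 3))
pos[..., 0], pos[..., 1] = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
prev = pos.copy()                         # previous positions for Verlet
REST = 1.0                                # rest length of structural springs

def step(pos, prev, dt=1 / 60, k=80.0):
    """One Verlet step with structural springs along grid rows and columns."""
    force = np.tile([0.0, 0.0, -9.8], (N, N, 1))      # gravity (unit mass)
    for axis in (0, 1):
        d = np.diff(pos, axis=axis)                   # vectors between neighbors
        length = np.linalg.norm(d, axis=-1, keepdims=True)
        f = k * (length - REST) * d / np.maximum(length, 1e-9)
        pad = [(0, 0)] * 3
        pad[axis] = (0, 1)
        force += np.pad(f, pad)                       # force on first particle
        pad[axis] = (1, 0)
        force -= np.pad(f, pad)                       # reaction on second one
    new = 2 * pos - prev + force * dt * dt
    new[0, :] = pos[0, :]                             # pin one cloth edge
    return new, pos

pos, prev = step(pos, prev)
# The paper smooths this coarse surface with curved faces; a placeholder count:
print("simulated particles:", N * N, "-> rendered samples:", (4 * N) ** 2)
```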


Advances in Computer Entertainment Technology | 2006

Motion-capture-based avatar control framework in third-person view virtual environments

Masaki Oshita

This paper presents a motion-capture-based control framework for third-person view virtual reality applications. Using motion capture devices, a user can directly control the full-body motion of an avatar in virtual environments. In addition, using a third-person view, in which the user watches himself as an avatar on the screen, the user can visually sense his own movements and interactions with other characters and objects. However, a few fundamental problems remain. First, it is difficult to realize physical interactions from the environment to the avatar. Second, it is also difficult for the user to walk around virtual environments, because the motion capture area is very small compared to the virtual environments. This paper proposes a novel framework to solve these problems. We propose a tracking control framework in which the avatar is controlled so as to track input motion from a motion capture device as well as system-generated motion. When an impact is applied to the avatar, the system finds an appropriate reactive motion and controls the weights of two tracking controllers in order to realize realistic yet controllable reactions. In addition, when the user walks in place, the system generates a walking motion for the controller to track. The walking speed and turn angle are also controlled through the user's walking gestures. Using our framework, the system generates seamless transitions between user-controlled motions and system-generated motions. In this paper, we also introduce a prototype application including a simplified optical motion capture system.
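
The weight-blending step can be pictured with a small Python sketch; the two-controller structure follows the abstract, but the pose representation, recovery time, and linear weight schedule are assumptions.

```python
# Sketch of blending two tracking targets after an impact: weight shifts to
# the reactive motion, then returns to the mocap input. Values are assumed.

def blended_target(mocap_pose, reactive_pose, t_since_impact, recovery=0.5):
    """Blend target poses; w=1 right at impact, w=0 once recovered."""
    w = max(0.0, 1.0 - t_since_impact / recovery)
    return [w * r + (1.0 - w) * m for m, r in zip(mocap_pose, reactive_pose)]

mocap = [0.1, 0.0, 0.4]       # joint angles streamed from the capture device
reactive = [0.6, -0.3, 0.2]   # joint angles from the stored reaction clip
for t in (0.0, 0.25, 0.5):
    print(t, blended_target(mocap, reactive, t))
```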


Sketch-Based Interfaces and Modeling | 2004

Pen-to-mime: a pen-based interface for interactive control of a human figure

Masaki Oshita

This paper presents an intuitive pen-based interface for interactively controlling a virtual human figure. Recent commercial pen devices can detect not only the pen position but also the pressure and tilt of the pen. We utilize this information to make a human figure perform various types of motions in response to the pen movements of the user. The figure walks, runs, turns, and steps following the trajectory and speed of the pen. It also bends, stretches, and tilts in response to the tilt of the pen. Moreover, it ducks and jumps in response to the pen pressure. Using this interface, the user controls a virtual human figure intuitively, as if he or she were holding and playing with a virtual puppet. In addition to the interface design, this paper describes a motion generation engine that produces various motions based on the parameters given by the pen interface. We take a motion blending approach and construct motion blending modules from a small set of motion capture data for each type of motion. Finally, we discuss the effectiveness and limitations of the interface based on some preliminary experiments.
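
The mapping from pen state to motion parameters can be sketched in a few lines of Python; the thresholds and the specific parameter set are illustrative assumptions, not the paper's values.

```python
# Hedged sketch of mapping pen state to motion parameters, following the
# abstract: trajectory -> speed/heading, tilt -> lean, pressure -> duck/jump.
import math

def pen_to_motion(p_prev, p_curr, dt, tilt_x, tilt_y, pressure):
    speed = math.dist(p_prev, p_curr) / dt           # walk/run along the stroke
    heading = math.atan2(p_curr[1] - p_prev[1], p_curr[0] - p_prev[0])
    lean = (tilt_x, tilt_y)                          # bend/stretch/tilt the figure
    if pressure > 0.8:
        action = "duck"                              # hard press -> duck
    elif pressure < 0.1:
        action = "jump"                              # release -> jump
    else:
        action = "locomote"
    return {"speed": speed, "heading": heading, "lean": lean, "action": action}

print(pen_to_motion((0, 0), (0.02, 0.01), 1 / 60, 0.1, -0.2, 0.5))
```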


Computer Graphics Forum | 2008

Smart Motion Synthesis

Masaki Oshita

Creating long motion sequences is a time-consuming task, even when motion capture equipment or motion editing tools are used. In this paper, we propose a system for creating a long motion sequence by combining elementary motion clips. The user first places the input motions on a timeline. The system then automatically generates a continuous and natural motion. Our system employs four motion synthesis methods: motion transition, motion connection, motion adaptation, and motion composition. Based on the constraints between the feet of the animated character and the ground, and on the timing of the input motions, the appropriate method is determined for each pair of overlapping or sequential motions. As the user changes the arrangement of the motion clips, the system interactively updates the output motion. Alternatively, the user can make the system execute an input motion as soon as possible, so that it follows the previous motion smoothly. Using our system, users can make use of existing motion clips. Because the entire process is automatic, even novices can easily use our system. A prototype system demonstrates the effectiveness of our approach.
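
To make the selection step concrete, the Python sketch below picks one of the four synthesis methods for a pair of timeline clips from their overlap and foot-ground constraints. The rule set is an assumption for illustration; the paper's exact conditions differ in detail.

```python
# Sketch of choosing a synthesis method per clip pair; the thresholds and
# rules are illustrative assumptions, not the paper's actual conditions.

def choose_method(a, b):
    """a, b: dicts with 'start', 'end' (seconds) and 'feet_constrained'."""
    overlap = min(a["end"], b["end"]) - max(a["start"], b["start"])
    if overlap <= 0:
        return "motion connection"       # sequential clips joined across a gap
    if a["feet_constrained"] and b["feet_constrained"]:
        return "motion adaptation"       # both grounded: adapt to constraints
    if overlap < 0.3:
        return "motion transition"       # short overlap: blend between clips
    return "motion composition"          # long overlap: compose simultaneously

walk = {"start": 0.0, "end": 2.0, "feet_constrained": True}
wave = {"start": 1.5, "end": 3.0, "feet_constrained": False}
print(choose_method(walk, wave))         # -> "motion composition"
```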


International Symposium on Visual Computing | 2010

Automatic learning of gesture recognition model using SOM and SVM

Masaki Oshita; Takefumi Matsunaga

In this paper, we propose an automatic learning method for gesture recognition. We combine two different pattern recognition techniques: the self-organizing map (SOM) and the support vector machine (SVM). First, we apply the SOM to divide the sample data into phases and construct a state machine. Next, we apply the SVM to learn the transition conditions between nodes; an independent SVM is constructed for each node. Of the various pattern recognition techniques for multi-dimensional data, the SOM is suitable for categorizing data into groups, and is thus used in the first stage. The SVM, on the other hand, is suitable for partitioning the feature space into regions belonging to each class, and is thus used in the second stage. Our approach is unique and effective for multi-dimensional, time-varying gesture recognition. The proposed method is a general gesture recognition method that can handle any kind of input data from any input device. In the experiment presented in this paper, we used two Nintendo Wii Remote controllers, with three-dimensional acceleration sensors, as input devices. The proposed method successfully learned the recognition models of several gestures.
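
A simplified Python sketch of the two-stage pipeline follows: a tiny 1-D SOM (hand-rolled in NumPy) quantizes sensor frames into phases, and scikit-learn's SVC then classifies gestures from frame-plus-phase features. The synthetic accelerometer data and the flat (non-state-machine) feature layout are assumptions; the paper's per-node SVMs are collapsed into a single classifier here.

```python
# Two-stage sketch: SOM groups frames into phases, SVM separates gestures.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def train_som(X, n_nodes=4, epochs=50, lr=0.5):
    """Tiny 1-D SOM: node weights quantize input frames into phases."""
    W = X[rng.choice(len(X), n_nodes)].copy()
    for e in range(epochs):
        rate = lr * (1 - e / epochs)
        for x in X[rng.permutation(len(X))]:
            bmu = np.argmin(np.linalg.norm(W - x, axis=1))  # best-matching unit
            W[bmu] += rate * (x - W[bmu])
    return W

# Synthetic 3-axis acceleration frames for two gestures (stand-ins for Wii data).
g0 = rng.normal([1.0, 0.0, 0.0], 0.2, size=(100, 3))
g1 = rng.normal([0.0, 1.0, 1.0], 0.2, size=(100, 3))
X = np.vstack([g0, g1])
y = np.repeat([0, 1], 100)

som = train_som(X)
phases = np.argmin(np.linalg.norm(X[:, None] - som[None], axis=2), axis=1)
features = np.column_stack([X, phases])   # each frame plus its SOM phase id
clf = SVC().fit(features, y)
print("training accuracy:", clf.score(features, y))
```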


Smart Graphics | 2009

Sketch-Based Interface for Crowd Animation

Masaki Oshita; Yusuke Ogiwara

In this paper, we propose a novel interface for controlling crowd animation. Crowd animation is widely used in movie production and computer games. However, to produce an intended crowd animation, many agent model parameters have to be tuned through trial and error. Our method estimates crowd parameters from a few example paths given by the user through a sketch-based interface. The parameters include guiding paths, moving speed, distance between agents, and adjustments of that distance (crowd regularity). Based on the computed parameters, a crowd animation is generated using an agent model. We demonstrate the effectiveness of our method through experiments.
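
The parameter estimation lends itself to a short Python sketch: speed from stroke length over stroke duration, agent spacing from the distance between adjacent example paths. The input format and the regularity measure are assumptions.

```python
# Hedged sketch of estimating crowd parameters from sketched example paths.
import numpy as np

def estimate_params(paths, durations):
    """paths: list of (N_i, 2) point arrays; durations: stroke times (s)."""
    speeds, spacings = [], []
    for pts, dur in zip(paths, durations):
        seg = np.diff(pts, axis=0)
        speeds.append(np.linalg.norm(seg, axis=1).sum() / dur)
    for a, b in zip(paths, paths[1:]):          # spacing of adjacent paths
        n = min(len(a), len(b))
        spacings.append(np.linalg.norm(a[:n] - b[:n], axis=1).mean())
    return {"speed": np.mean(speeds),
            "agent_distance": np.mean(spacings) if spacings else None,
            "regularity": 1.0 / (1.0 + np.std(spacings)) if spacings else None}

p1 = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, 0.0]])
p2 = np.array([[0.0, 1.0], [1.0, 1.1], [2.0, 1.0]])
print(estimate_params([p1, p2], durations=[2.0, 2.0]))
```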


International Conference on Computer Graphics and Interactive Techniques | 2012

Gamepad vs. touchscreen: a comparison of action selection interfaces in computer games

Masaki Oshita; Hirotaka Ishikawa

In this paper, we compare gamepad and touchscreen interfaces for action selection tasks in computer games. Touchscreens are now widely used for computer games on tablets, smartphones, and hand-held game consoles. In general, however, game players are thought to prefer a gamepad over a touchscreen, which motivated this comparison. Our results show that the touchscreen interface achieved better than or similar results to the gamepad interface. We believe that our results can provide a guideline for choosing and designing interfaces for computer games.
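
As an illustration of how such a comparison could be analyzed (the paper's actual measures and data are not reproduced here), the sketch below runs an unpaired t-test on fabricated, clearly hypothetical selection times.

```python
# Hypothetical analysis sketch: compare mean selection times per interface.
# The numbers are fabricated stand-ins, not the paper's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
gamepad = rng.normal(1.20, 0.25, size=30)   # assumed selection times (s)
touch = rng.normal(1.05, 0.25, size=30)

t, p = stats.ttest_ind(gamepad, touch)
print(f"mean gamepad {gamepad.mean():.2f}s vs touchscreen {touch.mean():.2f}s, "
      f"t={t:.2f}, p={p:.3f}")
```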


The Visual Computer | 2010

Generating animation from natural language texts and semantic analysis for motion search and scheduling

Masaki Oshita

This paper presents an animation system that generates an animation from natural language texts such as movie scripts or stories. It also proposes a framework for a motion database that stores numerous motion clips for various characters. We have developed semantic analysis methods to extract information for motion search and scheduling from script-like input texts. Given an input text, the system searches for an appropriate motion clip in the database for each verb in the input text. Temporal constraints between verbs are also extracted from the input text and are used to schedule the motion clips found. In addition, when necessary, certain automatic motions such as locomotion, taking an instrument, changing posture, and cooperative motions are searched for in the database. An animation is then generated using an external motion synthesis system. With our system, users can make use of existing motion clips. Moreover, because it takes natural language text as input, even novice users can use our system.
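
The verb-to-clip lookup might be sketched as follows; the database layout, keys, and fallback rule are illustrative assumptions rather than the paper's actual schema.

```python
# Sketch of verb-driven motion search; database contents are hypothetical.
motion_db = {
    ("walk", "man"): "man_walk_01.bvh",
    ("sit", "man"): "man_sit_01.bvh",
    ("wave", "woman"): "woman_wave_01.bvh",
}

def find_motion(verb, character):
    """Look up a clip for (verb, character); fall back to a locomotion clip."""
    return motion_db.get((verb, character), motion_db.get(("walk", character)))

script = [("man", "walk"), ("man", "sit"), ("woman", "wave")]
for character, verb in script:
    print(character, verb, "->", find_motion(verb, character))
```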


Cyberworlds | 2009

Generating Animation from Natural Language Texts and Framework of Motion Database

Masaki Oshita

This paper presents an animation system that generates an animation from natural language texts such as movie scripts or stories. We also propose the framework of a motion database that stores many motion clips for various characters. Given an input text, the system searches the database for an appropriate motion clip for each verb in the text. Temporal constraints between verbs are also extracted from the input text, and the retrieved motion clips are scheduled according to these constraints. In addition, when necessary, automatic motions such as locomotion, taking an instrument, changing posture, and cooperative motions are retrieved from the database. An animation is then generated using an external motion synthesis system. Using our system, users can make use of existing motion clips. Moreover, because it takes natural language text as input, even novice users can use our system.
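
Complementing the search sketch above, here is a hedged sketch of scheduling retrieved clips from pairwise temporal constraints; the constraint triples and the simple relaxation loop are assumptions, not the paper's scheduler.

```python
# Sketch of scheduling clips from "before"/"while" constraints (assumed form).

def schedule(clips, constraints):
    """clips: {name: duration in s}; constraints: (a, relation, b) triples."""
    start = {name: 0.0 for name in clips}
    for _ in range(len(constraints)):        # relax until start times settle
        for a, rel, b in constraints:
            if rel == "before":              # b begins after a ends
                start[b] = max(start[b], start[a] + clips[a])
            elif rel == "while":             # b begins together with a
                start[b] = max(start[b], start[a])
    return start

clips = {"walk": 2.0, "sit": 1.5, "wave": 1.0}
constraints = [("walk", "before", "sit"), ("wave", "while", "sit")]
print(schedule(clips, constraints))  # {'walk': 0.0, 'sit': 2.0, 'wave': 2.0}
```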

Collaboration


Dive into Masaki Oshita's collaborations.

Top co-authors:

Yuta Senju (Kyushu Institute of Technology)
Aoi Honda (Kyushu Institute of Technology)
Nik Ismail (Kyushu Institute of Technology)
Syun Morishige (Kyushu Institute of Technology)
Mohd Shahrizal Sunar (Universiti Teknologi Malaysia)