Publication


Featured research published by Naoya Iwamoto.


Computer Graphics Forum | 2015

Multi-layer Lattice Model for Real-Time Dynamic Character Deformation

Naoya Iwamoto; Hubert P. H. Shum; Longzhi Yang; Shigeo Morishima

Due to recent advancements in computer graphics hardware and software algorithms, deformable characters have become increasingly popular in real-time applications such as computer games. While there are mature techniques to generate primary deformation from skeletal movement, simulating realistic and stable secondary deformation, such as the jiggling of fat, remains challenging. On one hand, traditional volumetric approaches such as the finite element method require a high computational cost and are infeasible on limited hardware such as game consoles. On the other hand, while shape-matching-based simulations can produce plausible deformation in real time, they suffer from a stiffness problem in which particles either show unrealistic deformation due to high gains or cannot catch up with the body movement. In this paper, we propose a unified multi-layer lattice model to simulate the primary and secondary deformation of skeleton-driven characters. The core idea is to voxelize the input character mesh into multiple anatomical layers including the bone, muscle, fat and skin. Primary deformation is applied on the bone voxels with lattice-based skinning. The movement of these voxels is propagated to the other voxel layers using lattice shape matching simulation, creating natural secondary deformation. Our multi-layer lattice framework can produce simulation quality comparable to that of other volumetric approaches at a significantly smaller computational cost. It is best applied in real-time applications such as console games or interactive animation creation.
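As context for the secondary-deformation step, lattice shape matching pulls the particles of a voxel region toward a best-fit rigid transform of their rest configuration. Below is a minimal, illustrative sketch of one shape-matching step in the style of classical shape matching (Müller et al.), not the paper's full multi-layer implementation; the function name and the stiffness parameter `alpha` are assumptions:

```python
import numpy as np

def shape_match_step(rest, curr, alpha=0.5):
    """One shape-matching step: pull current particle positions toward
    a best-fit rigid transform of the rest shape.
    rest, curr: (N, 3) arrays; alpha in (0, 1] is the stiffness gain."""
    rest_c = rest - rest.mean(axis=0)   # centered rest shape
    curr_c = curr - curr.mean(axis=0)   # centered current shape
    # Covariance between current and rest configurations.
    A = curr_c.T @ rest_c
    # Polar decomposition via SVD: extract the rotational part of A.
    U, _, Vt = np.linalg.svd(A)
    R = U @ Vt
    if np.linalg.det(R) < 0:            # guard against reflections
        U[:, -1] *= -1
        R = U @ Vt
    # Goal positions: rotated rest shape moved to the current centroid.
    goal = rest_c @ R.T + curr.mean(axis=0)
    # Blend: high alpha gives a stiff response, low alpha a soft,
    # jiggly one -- the stiffness trade-off the abstract describes.
    return curr + alpha * (goal - curr)
```

With `alpha = 1` and a rigidly transformed cloud, the goal coincides with the current positions, so the step is the identity; for deformed clouds the particles are pulled back toward the rigid fit.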


International Conference on Computer Graphics Theory and Applications | 2015

Dance Motion Segmentation Method based on Choreographic Primitives

Narumi Okada; Naoya Iwamoto; Tsukasa Fukusato; Shigeo Morishima

Data-driven animation using a large human motion database enables the production of various natural human motions. While motion capture systems allow the acquisition of realistic human motion, the captured motion must be segmented into a series of primitive motions to construct a motion database. Although most segmentation methods have focused on periodic motion, e.g., walking and jogging, segmenting non-periodic and asymmetrical motions such as dance performances remains a challenging problem. In this paper, we present a specialized segmentation approach for human dance motion. Our approach consists of three steps based on the assumption that human dance motion is composed of consecutive choreographic primitives. First, we conduct an investigation based on dancer perception to determine segmentation components. Next, after professional dancers have selected segmentation sequences, we use their selections to define rules for segmenting choreographic primitives. Finally, the accuracy of our approach is verified by a user study, which shows that it is superior to existing segmentation methods. Using the obtained choreographic primitives, we also demonstrate automatic dance motion synthesis.
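The paper's segmentation rules are derived from dancer-selected sequences and are not reproduced here. As a simple illustrative baseline (not the authors' method), one can cut a motion wherever overall joint speed drops below a threshold, on the assumption that primitives are separated by brief pauses; the function name and thresholds are hypothetical:

```python
import numpy as np

def segment_by_pauses(joint_speed, threshold=0.05, min_len=10):
    """Baseline motion segmentation: cut where mean joint speed drops
    below `threshold`, assuming primitives end in brief pauses.
    joint_speed: (T,) mean joint speed per frame.
    Returns a list of (start, end) frame ranges."""
    is_pause = joint_speed < threshold
    segments, start = [], 0
    for t in range(1, len(joint_speed)):
        # Close a segment when motion stops after sustained movement.
        if is_pause[t] and not is_pause[t - 1] and t - start >= min_len:
            segments.append((start, t))
            start = t
    if len(joint_speed) - start >= min_len:
        segments.append((start, len(joint_speed)))
    return segments
```

Such threshold baselines work for motions with clear stops but miss continuous transitions, which is precisely the gap perception-derived rules aim to close.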


International Conference on Computer Graphics and Interactive Techniques | 2011

Estimating fluid simulation parameters from videos

Naoya Iwamoto; Ryusuke Sagawa; Shoji Kunitomo; Shigeo Morishima

Recently, video-based methods for high-quality 3D shape and motion modeling of fluids have been proposed [Huamin et al. 2009]. However, these approaches only aim to capture and reproduce the original fluid behavior as it is.


Advances in Computer Entertainment Technology | 2017

DanceDJ: A 3D Dance Animation Authoring System for Live Performance

Naoya Iwamoto; Takuya Kato; Hubert P. H. Shum; Ryo Kakitsuka; Kenta Hara; Shigeo Morishima

Dance is an important component of live performance for expressing emotion and presenting visual context. Human dance performances typically require expert knowledge of dance choreography and professional rehearsal, which are too costly for casual entertainment venues and clubs. Recent advancements in character animation and motion synthesis have made it possible to synthesize virtual 3D dance characters in real time. The major problem in existing systems is the lack of an intuitive interface for real-time dance control. We propose a new system called DanceDJ to solve this problem. Our system consists of two parts. The first is an underlying motion analysis system that evaluates motion features, including dance features such as posture and movement tempo, as well as audio features such as music tempo and structure. As a pre-process, given a dance motion database, our system evaluates the quality of possible timings at which to connect and switch between dance motions. At run time, we provide a control interface with visual guidance. We observe that disc jockeys (DJs) effectively control the mixing of music using a DJ controller, and therefore propose a DJ controller for controlling dancing characters. This allows DJs to transfer their skills from music control to dance control using a similar hardware setup. We map different motion control functions onto the DJ controller and visualize the timing of natural connection points, so that the DJ can effectively govern the synthesized dance motion. We conducted two user experiments to evaluate the user experience and the quality of the dance character. Quantitative analysis shows that our system performs well in both motion control and simulation quality.
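The pre-process that scores possible switch timings between dance clips can be sketched as a cost combining pose distance and beat alignment. This is an illustrative approximation, not the DanceDJ implementation; the function names, weights, and the `beat_phase` representation are all assumptions:

```python
import numpy as np

def transition_cost(pose_a, pose_b, beat_phase,
                    pose_weight=1.0, beat_weight=0.5):
    """Score a candidate switch point between two dance clips.
    pose_a, pose_b: (J, 3) joint positions at the switch frame.
    beat_phase: distance of the switch frame from the nearest music
    beat, as a fraction of the beat interval (0 = on the beat).
    Lower cost = more natural connection."""
    pose_dist = np.linalg.norm(pose_a - pose_b, axis=1).mean()
    return pose_weight * pose_dist + beat_weight * beat_phase

def best_switch_frame(clip_a, clip_b, beat_phases):
    """Pick the frame index minimizing the transition cost over
    aligned frames of two clips. clip_a, clip_b: (T, J, 3)."""
    costs = [transition_cost(a, b, p)
             for a, b, p in zip(clip_a, clip_b, beat_phases)]
    return int(np.argmin(costs))
```

Precomputing such costs over a whole motion database is what lets the run-time interface visualize natural connection points before the DJ commits to a switch.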


International Conference on Computer Graphics and Interactive Techniques | 2014

The efficient and robust sticky viscoelastic material simulation

Kakuto Goto; Naoya Iwamoto; Shunsuke Saito; Shigeo Morishima



International Conference on Computer Graphics and Interactive Techniques | 2014

Material parameter editing system for volumetric simulation models

Naoya Iwamoto; Shigeo Morishima



International Conference on Computer Graphics and Interactive Techniques | 2013

Expressive dance motion generation

Narumi Okada; Kazuki Okami; Tsukasa Fukusato; Naoya Iwamoto; Shigeo Morishima

Expressive power, such as accents in motion and arm movement, is an indispensable factor in dance performance, because there is a large difference in appearance between plain dance motion and expressive motion. Needless to say, expressive dance motion makes a great impression on viewers. However, creating such dance motion is challenging because most creators have little knowledge of dance performance. There is therefore demand for a system that generates expressive dance motion with ease. Tsuruta et al. [2010] generated expressive dance motion by changing only the speed of the input motion or altering joint angles. However, the expressive power was not evaluated rigorously, and the generated motion did not synchronize with the music, so it did not always satisfy viewers.


International Conference on Computer Graphics and Interactive Techniques | 2013

Reflectance estimation of human face from a single shot image

Kazuki Okami; Naoya Iwamoto; Akinobu Maejima; Shigeo Morishima

Simulating the reflectance of translucent materials is one of the most important factors in creating realistic CG objects. Estimating the reflectance characteristics of translucent materials from a single image is a very efficient way to re-render objects that exist in real environments. However, this task is considerably challenging because it involves many unknown parameters. Munoz et al. [2011] proposed a method for estimating the bidirectional surface scattering reflectance distribution function (BSSRDF) from a single image. However, it is difficult or impossible to estimate the BSSRDF of materials with complex shapes, because their method targeted convex objects and therefore used a rough depth-recovery technique for globally convex objects. In this paper, we propose a method for accurately estimating the BSSRDF of human faces, which have complex shapes. We use a 3D face reconstruction technique to satisfy the above assumption; this allows us to acquire more accurate geometries of human faces and thereby estimate the reflectance characteristics of faces.


International Conference on Computer Graphics and Interactive Techniques | 2012

Hair motion capturing from multiple view videos

Tsukasa Fukusato; Naoya Iwamoto; Shoji Kunitomo; Hirofumi Suda; Shigeo Morishima

To create a realistic virtual human, hair animation is an indispensable factor. Many hair simulation methods have been proposed, but simulating realistic hair motion, including the effects of turbulent flow and friction among an enormous number of hairs, remains one of the most challenging problems. Thus, to reproduce hair motion with these desirable features, capturing real hair motion has advantages over simulation. Ishikawa et al. [2007] used a motion capture system to track reflective markers placed on several strands. They successfully reproduced hair motion including the effect of turbulent flow, but since the attached markers have weight, and only sparse strands were captured, so that friction among the remaining hairs is ignored, the result is still far from real hair motion.


Journal of the Geological Society of Japan | 2004

Bottom environmental changes during the past 100 years in Kitanada Bay, Ehime Prefecture, South-west Japan

Atsuko Amano; Takahiko Inoue; Naoya Iwamoto; Fujihiko Shioya; Yoshio Inouchi
