Publication


Featured research published by Masahide Kawai.


International Conference on Computer Graphics and Interactive Techniques | 2015

Wrinkles individuality representing aging simulation

Pavel A. Savkin; Daiki Kuwahara; Masahide Kawai; Takuya Kato; Shigeo Morishima

The appearance of a human face changes with aging: sagging, spots, lusters, and wrinkles appear. Facial aging simulation techniques are therefore required for long-term criminal investigations. While the appearance of an aged face varies greatly from person to person, wrinkles are one of the most important features representing human individuality, and the individuality of wrinkles is defined by their shape and position.


Conference on Multimedia Modeling | 2015

FOCUSING PATCH: Automatic Photorealistic Deblurring for Facial Images by Patch-Based Color Transfer

Masahide Kawai; Shigeo Morishima

Facial image synthesis often produces blurred facial images that lack high-frequency components, resulting in flat edges. Moreover, the synthesis process results in inconsistent facial images, for example where the white of the eye is tinged with the color of the iris or the nasal cavity is tinged with the skin color. Therefore, we propose a method that can deblur an inconsistent synthesized facial image, including the strong blurs created by common image morphing methods, and synthesize photographic-quality facial images as clear as an image captured by a camera. Our system uses two original algorithms: patch color transfer and patch-optimized visio-lization. Patch color transfer normalizes facial luminance values with high precision, and patch-optimized visio-lization synthesizes a deblurred, photographic-quality facial image. The advantage of our method is that it reconstructs the high-frequency components (concavo-convex detail) of human skin and removes strong blurs using only the input images used for the original image morphing.
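As a rough illustration of the luminance-normalization idea behind the patch color transfer step, the sketch below matches per-patch luminance statistics of a synthesized image to a reference image. The function name, patch size, and mean/std matching rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def patchwise_luminance_transfer(source, reference, patch=32, eps=1e-6):
    """Match per-patch luminance statistics of `source` to `reference`.

    Both inputs are float grayscale (luminance) images of identical shape.
    This is a generic mean/std matching baseline, assumed for illustration;
    it is not the paper's patch color transfer algorithm.
    """
    out = source.astype(float).copy()
    h, w = out.shape
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            s = out[y:y + patch, x:x + patch]
            r = reference[y:y + patch, x:x + patch]
            # Shift and scale the source patch toward the reference statistics.
            out[y:y + patch, x:x + patch] = (s - s.mean()) * (r.std() / (s.std() + eps)) + r.mean()
    return np.clip(out, 0.0, 1.0)
```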


International Symposium on Visual Computing | 2014

Automatic Photorealistic 3D Inner Mouth Restoration from Frontal Images

Masahide Kawai; Tomoyori Iwao; Akinobu Maejima; Shigeo Morishima

In this paper, we propose a novel method to generate highly photorealistic three-dimensional (3D) inner mouth animation that is well fitted to an original ready-made speech animation, using only frontally captured images and a small database. The algorithm consists of quasi-3D model reconstruction, motion control of the teeth and tongue, and final compositing of photorealistic speech animation tailored to the original.


International Conference on Computer Graphics and Interactive Techniques | 2015

Automatic synthesis of eye and head animation according to duration and point of gaze

Hiroki Kagiyama; Masahide Kawai; Daiki Kuwahara; Takuya Kato; Shigeo Morishima

In movie and video game production, synthesizing subtle eye movements and the corresponding head movements of a CG character is essential to making content dramatic and impressive. However, producing them costs a great deal of time and labor, because they often have to be created manually by skilled artists.


Journal of Information Processing | 2015

Automatic Generation of Photorealistic 3D Inner Mouth Animation only from Frontal Images

Masahide Kawai; Tomoyori Iwao; Akinobu Maejima; Shigeo Morishima

In this paper, we propose a novel method to generate highly photorealistic three-dimensional (3D) inner mouth animation that is well fitted to an original ready-made speech animation, using only frontally captured images and small databases. The algorithm consists of quasi-3D model reconstruction, motion control of the teeth and tongue, and final compositing of photorealistic speech animation tailored to the original. In general, producing a satisfactory photorealistic appearance of the inner mouth that is synchronized with mouth movement is a complicated and time-consuming task, because the tongue and mouth are too flexible and delicate to be modeled with the large number of meshes required. Therefore, in some cases this process is omitted or replaced with a very simple generic model. Our proposed method, on the other hand, can automatically generate 3D inner mouth appearances with improved photorealism from only three inputs: an original tailor-made lip-sync animation, a single image of the speaker's teeth, and a syllabic decomposition of the desired speech. The key idea of our proposed method is to combine 3D reconstruction and simulation with two-dimensional (2D) image processing using only the above three inputs, together with a tongue database and a mouth database. The satisfactory performance of our proposed method is illustrated by the significant improvement in picture quality of several tailor-made animations, to a degree nearly equivalent to that of camera-captured videos.
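The three-input structure (lip-sync animation, a teeth image, and a syllabic decomposition) can be pictured as a per-frame lookup-and-composite loop. The sketch below is a schematic assumption only: the `Frame` fields, the syllable-keyed `tongue_db`, and the simple alpha compositing stand in for the paper's quasi-3D reconstruction and motion control, which are not reproduced here.

```python
from dataclasses import dataclass
from typing import Dict, List

import numpy as np

@dataclass
class Frame:
    image: np.ndarray       # lip-sync animation frame, H x W x 3 in [0, 1]
    mouth_mask: np.ndarray  # binary mask of the inner-mouth region, H x W
    syllable: str           # syllable assumed to be spoken in this frame

def composite_inner_mouth(frames: List[Frame],
                          tongue_db: Dict[str, np.ndarray],  # syllable -> RGBA tongue image
                          teeth_image: np.ndarray) -> List[np.ndarray]:
    """Paste a syllable-dependent tongue appearance and the speaker's teeth
    into the inner-mouth region of every frame (illustrative only)."""
    results = []
    for f in frames:
        tongue = tongue_db.get(f.syllable, tongue_db["rest"])
        # Tongue over teeth inside the mouth cavity (alpha > 0 selects the tongue).
        inner = np.where(tongue[..., 3:] > 0, tongue[..., :3], teeth_image)
        mask = f.mouth_mask[..., None].astype(float)
        results.append((1 - mask) * f.image + mask * inner)
    return results
```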


International Conference on Computer Graphics and Interactive Techniques | 2014

Automatic deblurring for facial image based on patch synthesis

Masahide Kawai; Tomoyori Iwao; Akinobu Maejima; Shigeo Morishima



International Conference on Computer Graphics and Interactive Techniques | 2014

Example-based blendshape sculpting with expression individuality

Takuya Kato; Shunsuke Saito; Masahide Kawai; Tomoyori Iwao; Akinobu Maejima; Shigeo Morishima

Figure 1: Monkey blendshapes created from a human source model. A blendshape sculpted by our method has fewer vertex errors than a coarsely sculpted blendshape or, for instance, a blendshape generated by Deformation Transfer [Sumner et al. 2004].
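Since the figure compares vertex errors, the difference between two blendshapes with the same topology is commonly reported as a per-vertex distance. A minimal version of such a metric (a generic measure, not necessarily the paper's exact one) is sketched below.

```python
import numpy as np

def mean_vertex_error(blendshape_a: np.ndarray, blendshape_b: np.ndarray) -> float:
    """Mean Euclidean distance between corresponding vertices of two
    blendshapes with identical topology (N x 3 vertex arrays)."""
    return float(np.linalg.norm(blendshape_a - blendshape_b, axis=1).mean())
```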


Journal of Information Processing | 2014

Data-Driven Speech Animation Synthesis Focusing on Realistic Inside of the Mouth

Masahide Kawai; Tomoyori Iwao; Daisuke Mima; Akinobu Maejima; Shigeo Morishima


International Conference on Computer Graphics and Interactive Techniques | 2013

Photorealistic inner mouth expression in speech animation

Masahide Kawai; Tomoyori Iwao; Daisuke Mima; Akinobu Maejima; Shigeo Morishima


Archive | 2013

Video-Realistic Inner Mouth Reanimation

Masahide Kawai; Tomoyori Iwao; Akinobu Maejima; Shigeo Morishima
