Publication


Featured research published by Masataka Seo.


Symposium on Information and Communication Technology | 2014

Quantitative assessment of facial paralysis using local binary patterns and Gabor filters

Masataka Seo; Yen-Wei Chen; Naoki Matsushiro

Facial paralysis is a common clinical condition with an incidence of 20 to 25 patients per 100,000 people per year. An objective, quantitative tool to support medical diagnosis is necessary and important. This paper proposes a robust method that overcomes the drawbacks of other techniques for developing such a tool. In our research, we use a combination of local binary patterns (LBP) and Gabor filters to compute the features used for training and testing. Experiments on a dynamic facial expression database show that our results outperform other techniques.
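
The abstract gives no implementation details; a minimal sketch of the feature-extraction step, assuming scikit-image and illustrative LBP and Gabor parameters (not the paper's settings), might look like this:

```python
# Minimal sketch: LBP + Gabor feature extraction for one RGB face image.
# The LBP radius and the Gabor frequencies/orientations are illustrative
# assumptions, not the parameters used in the paper.
import numpy as np
from skimage import io, color
from skimage.feature import local_binary_pattern
from skimage.filters import gabor

def extract_features(image_path):
    gray = color.rgb2gray(io.imread(image_path))   # assumes an RGB input

    # Uniform LBP histogram over the face region.
    radius, n_points = 2, 16
    lbp = local_binary_pattern(gray, n_points, radius, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=n_points + 2,
                               range=(0, n_points + 2), density=True)

    # Mean Gabor-filter magnitudes at a few frequencies and orientations.
    gabor_feats = []
    for frequency in (0.1, 0.2, 0.3):
        for theta in np.linspace(0, np.pi, 4, endpoint=False):
            real, imag = gabor(gray, frequency=frequency, theta=theta)
            gabor_feats.append(np.sqrt(real**2 + imag**2).mean())

    return np.concatenate([lbp_hist, gabor_feats])
```

The resulting feature vector would then be fed to whatever classifier or regressor is used for training and testing.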


Intelligent Information Hiding and Multimedia Signal Processing | 2009

Face Image Metamorphosis with an Improved Multilevel B-Spline Approximation

Masataka Seo; Yen-Wei Chen

Image metamorphosis is an important visual-effect tool in the entertainment industry and in other research fields. To date, many morphing algorithms have been proposed and used, such as mesh warping, radial basis functions, thin-plate splines, and B-splines (free-form deformations). In particular, free-form deformation based on B-spline approximation is a powerful and useful morphing algorithm and is proven to have the one-to-one property, which prevents the warped image from folding back upon itself. In order to minimize the morphing error, a multilevel B-spline approximation has been proposed [1]. Although a natural and smooth morphed image can be obtained with the multilevel B-spline approximation, it incurs a large computational cost. In this paper, we propose an improved multilevel B-spline approximation method that reduces this cost. We propose a lattice integration method and an adaptive lattice method for multilevel approaches. The experimental results show that the proposed method is more efficient than the conventional multilevel B-spline method.
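
As an illustration of the underlying free-form deformation, here is a minimal sketch of a B-spline FFD warp driven by a coarse displacement lattice; the paper's multilevel refinement, lattice integration, and adaptive lattice are not reproduced, and the lattice size and displacements are arbitrary assumptions:

```python
# Minimal sketch of a B-spline free-form deformation (FFD) warp: a coarse
# lattice of control-point displacements is interpolated to per-pixel
# displacements with bicubic splines and used to resample the image.
import numpy as np
from scipy.interpolate import RectBivariateSpline
from scipy.ndimage import map_coordinates

def _upsample(lattice, h, w):
    """Interpolate a coarse displacement lattice to an h x w field."""
    gy = np.linspace(0, h - 1, lattice.shape[0])
    gx = np.linspace(0, w - 1, lattice.shape[1])
    return RectBivariateSpline(gy, gx, lattice)(np.arange(h), np.arange(w))

def ffd_warp(image, lattice_dx, lattice_dy):
    h, w = image.shape
    dx, dy = _upsample(lattice_dx, h, w), _upsample(lattice_dy, h, w)
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    # Backward mapping: sample the source image at the displaced positions.
    return map_coordinates(image, [yy + dy, xx + dx], order=1, mode="nearest")

# Example: a 6x6 lattice of small random displacements on a 256x256 image.
rng = np.random.default_rng(0)
img = rng.random((256, 256))
warped = ffd_warp(img, rng.normal(0, 3, (6, 6)), rng.normal(0, 3, (6, 6)))
```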


International Conference on Image Processing | 2016

Quantitative analysis of facial paralysis based on three-dimensional features

Yen-Wei Chen; Masataka Seo; Naoki Matsushiro; Wei Xiong

Objective evaluation of disease is one of the desirable goals in medicine. This paper presents a technique for the objective evaluation of facial paralysis, in which features are extracted from landmark positions in three-dimensional space (3D landmarks). The landmarks are initialized manually in the first frontal frame and are tracked in the subsequent frontal frames. The landmark positions are then reconstructed in 3D space using multiview images and a camera self-calibration technique. From the 3D landmarks, features are extracted in 3D space (called 3D features) and used for classification. These 3D features may contain additional information, such as depth, and may therefore help improve the accuracy of the predicted scores. In addition, our method uses the camera self-calibration technique to estimate the camera parameters and does not use laser scanning for 3D reconstruction, so it is more flexible to set up and safer for the patient. In the overall evaluation, experiments showed that our technique achieved results superior to other methods.
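
A minimal sketch of the 3D reconstruction step, assuming two views and projection matrices that are already available (the paper obtains them by camera self-calibration, which is not shown), using OpenCV's linear triangulation; the asymmetry feature and the landmark indices are hypothetical:

```python
# Minimal sketch: reconstruct 3-D landmark positions from two views by
# linear triangulation, given 3x4 projection matrices P1 and P2.
import numpy as np
import cv2

def triangulate_landmarks(P1, P2, pts1, pts2):
    """pts1, pts2: (N, 2) arrays of corresponding 2-D landmarks."""
    pts1 = np.asarray(pts1, dtype=float).T           # shape (2, N)
    pts2 = np.asarray(pts2, dtype=float).T
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)  # homogeneous, (4, N)
    return (X_h[:3] / X_h[3]).T                      # (N, 3) Euclidean points

# One possible 3-D feature: distance between a landmark and its mirrored
# counterpart on the other side of the face (indices are hypothetical).
def asymmetry(landmarks_3d, left_idx, right_idx, midline_x=0.0):
    left = landmarks_3d[left_idx].copy()
    left[0] = 2 * midline_x - left[0]                # mirror x across midline
    return np.linalg.norm(left - landmarks_3d[right_idx])
```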


International Conference on Image Processing | 2013

Reconstruction of 3D dynamic expressions from single facial image

Shunya Osawa; Guifang Duan; Masataka Seo; Takanori Igarashi; Yen-Wei Chen

Recently, automatic facial expression analysis and recognition has been gaining increasing interest in the field of computer vision. The capture and construction of 3D dynamic expressions often takes a long time and requires specialized hardware, which limits possible applications. In this paper, we reconstruct 3D dynamic expression images from a single 2D facial image. The proposed method is based on statistical learning, in which multiple subspaces are learned to support 3D dynamic expression generation. The results show that the proposed method can effectively generate 3D dynamic expressions using only one input 2D facial image.
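
The abstract does not detail the subspace model; a minimal sketch of the general idea, using PCA subspaces and ridge regression as stand-ins (both assumptions, as are the file names and dimensions):

```python
# Minimal sketch of the statistical-learning idea: learn subspaces for 2-D
# face images and for 3-D dynamic expression sequences, then learn a
# regression from 2-D coefficients to 3-D coefficients.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

# Hypothetical training data: flattened 2-D images and the corresponding
# flattened 3-D dynamic expression sequences.
X_2d = np.load("train_images_2d.npy")        # (n_samples, img_dim)
Y_3d = np.load("train_sequences_3d.npy")     # (n_samples, seq_dim)

pca_2d = PCA(n_components=50).fit(X_2d)
pca_3d = PCA(n_components=50).fit(Y_3d)
reg = Ridge(alpha=1.0).fit(pca_2d.transform(X_2d), pca_3d.transform(Y_3d))

def reconstruct_sequence(image_2d):
    """Predict a 3-D dynamic expression sequence from one 2-D face image."""
    coeff_2d = pca_2d.transform(image_2d.reshape(1, -1))
    coeff_3d = reg.predict(coeff_2d)
    return pca_3d.inverse_transform(coeff_3d).reshape(-1)
```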


Fuzzy Systems and Knowledge Discovery | 2015

Quantitative analysis of facial paralysis based on filters of concentric modulation

Masataka Seo; Naoki Matsushiro; Yen-Wei Chen

Facial paralysis is a common disease with an annual incidence of 25 to 35 patients per 100,000 people. Patients with the disease lose, partly or completely, the ability to move the face, so an effective method for the objective evaluation of the degree of facial paralysis would be useful. This paper presents a method to develop such a tool based on filtered images. In our work, we propose a filter that extracts the useful isotropic frequencies of images in local regions and removes unnecessary frequency components before feature extraction. The filter function is the modulation of an isotropic Gaussian by a radial sinusoidal function. An interesting characteristic of this filter is that the passbands are the same for all orientations, which can be useful in tasks such as quantifying the degree of facial palsy. In this work, the symmetry and asymmetry between the two sides of the face is measured on the filtered images, and the measured information is then used for classification. Experiments on a database from Osaka Police Hospital show that our filtering technique gives results superior to the other methods tested.
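
A minimal sketch of the filter as described, an isotropic Gaussian modulated by a radial sinusoid, together with a simple left/right symmetry measure; sigma, the radial frequency, and the symmetry measure itself are illustrative assumptions:

```python
# Concentric-modulation filter sketch: an isotropic Gaussian multiplied by
# a radial cosine, so the passband is the same for every orientation.
import numpy as np
from scipy.signal import fftconvolve

def concentric_filter(size=65, sigma=10.0, freq=0.1):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    r = np.hypot(x, y)
    kernel = np.exp(-(r**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * r)
    return kernel - kernel.mean()        # zero mean so flat regions give 0

def filter_image(gray_image):
    return fftconvolve(gray_image, concentric_filter(), mode="same")

# Simple asymmetry measure between the two halves of a filtered face image
# (assumes the face is roughly centered and upright in the frame).
def asymmetry_score(filtered):
    mid = filtered.shape[1] // 2
    left = filtered[:, :mid]
    right_mirrored = filtered[:, mid:][:, ::-1][:, :mid]
    return np.mean(np.abs(np.abs(left) - np.abs(right_mirrored)))
```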


International Congress on Image and Signal Processing | 2016

Joint subspace learning for reconstruction of 3D facial dynamic expression from single image

Masataka Seo; Yen-Wei Chen

Recently, the synthesis of 3D dynamic expressions has become an important concern in computer graphics, facial recognition, and related areas. In this study, we propose a regression-based joint subspace learning method for the automatic synthesis of 3D dynamic expression images. The method synthesizes 3D dynamic expression images from a single 2D facial image. We use two subspaces (the view subspace and the frame subspace) to synthesize a 3D image. First, we use the view subspace to estimate multi-view facial images from a frontal image. Next, we construct a 3D image using the estimated multi-view facial images. Finally, we estimate the 3D images in different frames by using the frame subspace to synthesize 3D dynamic expression images. This approach differs from conventional joint subspace learning, in which the coefficients estimated from the input image are used directly for synthesis. Furthermore, we propose using textural information to improve the accuracy of the synthesized images.
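
A minimal sketch of the two-stage idea (view subspace, then frame subspace), using PCA and linear regression as stand-ins; the dimensionalities, the regressors, and the interface between the stages are assumptions, and the intermediate 3D reconstruction is left abstract:

```python
# Two-stage subspace sketch: stage 1 maps a frontal image to multi-view
# images; stage 2 maps a 3-D representation of the first frame to the
# remaining frames of the expression sequence.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

class TwoStageSynthesizer:
    def __init__(self, n_view=40, n_frame=40):
        self.pca_front, self.pca_views = PCA(n_view), PCA(n_view)
        self.pca_3d, self.pca_frames = PCA(n_frame), PCA(n_frame)
        self.view_reg, self.frame_reg = LinearRegression(), LinearRegression()

    def fit(self, fronts, multiviews, shapes_3d, frame_seqs):
        # Stage 1: frontal-image coefficients -> multi-view coefficients.
        self.view_reg.fit(self.pca_front.fit_transform(fronts),
                          self.pca_views.fit_transform(multiviews))
        # Stage 2: first-frame 3-D coefficients -> full-sequence coefficients.
        self.frame_reg.fit(self.pca_3d.fit_transform(shapes_3d),
                           self.pca_frames.fit_transform(frame_seqs))
        return self

    def synthesize_views(self, front):
        c = self.view_reg.predict(self.pca_front.transform(front.reshape(1, -1)))
        return self.pca_views.inverse_transform(c)

    def synthesize_frames(self, shape_3d):
        c = self.frame_reg.predict(self.pca_3d.transform(shape_3d.reshape(1, -1)))
        return self.pca_frames.inverse_transform(c)
```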


International Congress on Image and Signal Processing | 2016

Automatic feature point detection using deep convolutional networks for quantitative evaluation of facial paralysis

Hiroki Yoshihara; Masataka Seo; Naoki Matsushiro; Yen-Wei Chen

Feature point detection is an important pre-processing step for the quantitative evaluation of facial paralysis. Conventional methods such as the active shape model (ASM) and the active appearance model (AAM) are trained on normal faces, so they cannot accurately detect feature points on a face with paralysis. In this paper, we propose an automatic and accurate feature point detection method for the quantitative evaluation of facial paralysis using deep convolutional neural networks (DCNN). The proposed method consists of two steps. We first use AAM for initial feature point detection. In the second step, a patch centered on the detected point is used as the input of a DCNN for refinement. Experiments demonstrated that the proposed method can significantly improve the detection accuracy of the conventional AAM.
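
A minimal sketch of the refinement step, assuming a small PyTorch CNN that regresses a (dx, dy) correction from a 32x32 grayscale patch centered at the AAM-detected point; the architecture and patch size are illustrative, not the paper's network:

```python
# Patch-based refinement sketch: the network receives a patch cropped
# around an AAM-detected point and predicts a pixel offset.
import torch
import torch.nn as nn

class PatchRefiner(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 16
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 8 -> 4
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 4 * 4, 128), nn.ReLU(), nn.Linear(128, 2)
        )

    def forward(self, patch):                      # patch: (N, 1, 32, 32)
        return self.head(self.features(patch))     # (N, 2) offsets in pixels

# Usage: refine an initial AAM point (x0, y0) with the predicted offset.
model = PatchRefiner().eval()
patch = torch.rand(1, 1, 32, 32)                   # patch cropped around (x0, y0)
with torch.no_grad():
    dx, dy = model(patch)[0].tolist()
# refined point: (x0 + dx, y0 + dy)
```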


International Conference on Pattern Recognition | 2016

Quantitative analysis of facial paralysis based on limited-orientation modified circular Gabor filters

Masataka Seo; Naoki Matsushiro; Wei Xiong; Yen-Wei Chen

Computer-aided diagnosis of disease has been developing rapidly in recent years. This paper presents a frequency-based approach for the objective quantitative analysis of facial paralysis. In this method, limited-orientation modified circular Gabor filters (LO-MCGFs) are used to enhance the desired frequencies in images. Features are then extracted from the filtered images for classification. The first advantage of the LO-MCGF is that its inner passbands are uniform, which helps remove noise and control frequencies more effectively. The second is that the LO-MCGF inherits the robust characteristics of the circular Gabor filter for rotation-invariant texture regions. Hence, the LO-MCGF-based technique remarkably improves the accuracy of score estimation for expressions whose local textures are rotation invariant. Finally, the limited filtered regions, or limited propagation orientations, allow the LO-MCGF to focus only on specific regions, so it avoids the influence of irrelevant regions; in other words, it improves spatial localization. In the overall evaluation, experiments show that our proposed method is superior to other contemporary techniques tested on a dynamic facial expression database.
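
A minimal sketch of the two ingredients described above, a frequency band that is uniform over all orientations, restricted to a limited angular range; it uses an ideal (hard) frequency-domain mask rather than the paper's Gabor-shaped filter, and the band limits and angular range are arbitrary assumptions:

```python
# Orientation-limited circular band-pass sketch: a ring in the frequency
# domain (same passband for every orientation) intersected with an angular
# wedge that keeps only a limited range of orientations.
import numpy as np

def _angular_mask(angle, center, width):
    diff = np.angle(np.exp(1j * (angle - center)))   # wrap to [-pi, pi]
    return np.abs(diff) <= width

def limited_orientation_bandpass(gray_image, r_low=0.05, r_high=0.15,
                                 theta_center=np.pi / 2, theta_width=np.pi / 4):
    h, w = gray_image.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    radius = np.hypot(fx, fy)
    angle = np.arctan2(fy, fx)

    ring = (radius >= r_low) & (radius <= r_high)    # uniform over orientations
    # Keep the chosen orientation and its opposite (real-image spectra are
    # conjugate symmetric).
    wedge = (_angular_mask(angle, theta_center, theta_width) |
             _angular_mask(angle, theta_center + np.pi, theta_width))

    spectrum = np.fft.fft2(gray_image) * (ring & wedge)
    return np.abs(np.fft.ifft2(spectrum))            # filtered magnitude
```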


The 2015 IEEE RIVF International Conference on Computing & Communication Technologies - Research, Innovation, and Vision for Future (RIVF) | 2015

Quantitative evaluation of facial paralysis using tracking method

Masataka Seo; Yen-Wei Chen; Naoki Matsushiro

Facial paralysis is a common clinical condition with an incidence of 20 to 25 patients per 100,000 people per year. An objective, quantitative tool to support medical diagnosis is necessary and important. This paper proposes a simple, visual, and highly efficient method that overcomes the drawbacks of other methods for developing such a tool. In our research, we track interest points to measure the features used for training and testing. Experiments on a dynamic facial expression database show that our method outperforms other techniques.
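
The abstract does not name the tracker; a minimal sketch using pyramidal Lucas-Kanade optical flow (an assumption) to track the interest points through a video and summarize their movement:

```python
# Tracking-based measurement sketch: interest points marked in the first
# frame are tracked frame to frame, and each point's trajectory can then be
# compared between the two sides of the face.
import numpy as np
import cv2

def track_points(frames, initial_points):
    """frames: list of 8-bit grayscale images; initial_points: (N, 2) array."""
    pts = initial_points.astype(np.float32).reshape(-1, 1, 2)
    trajectories = [pts.reshape(-1, 2).copy()]
    prev = frames[0]
    for frame in frames[1:]:
        pts, status, _ = cv2.calcOpticalFlowPyrLK(
            prev, frame, pts, None, winSize=(21, 21), maxLevel=3)
        trajectories.append(pts.reshape(-1, 2).copy())
        prev = frame
    return np.stack(trajectories)          # (n_frames, N, 2)

def movement_amplitude(trajectories):
    """Maximum displacement of each point relative to the first frame."""
    return np.linalg.norm(trajectories - trajectories[0], axis=2).max(axis=0)
```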


Biomedical Engineering and Informatics | 2013

Optimal color space for quantitative analysis of shiny skin

Takahiro Naoki; Masataka Seo; Takanori Igarashi; Yen-Wei Chen

This paper explores the optimal color space for quantitative analysis of shiny skin. Assuming that the forehead areas exhibit shine and the outer cheek areas do not, we clip forehead and cheek areas from face images and then transform them into four color spaces (RGB, HSV, YIQ, and YCbCr) for analysis. By computing statistics of the clipped image patches, the ratio of the average differences and the variance differences between the shiny and non-shiny areas can be obtained. For quantitative evaluation, we calculate the correlation between these statistics and sensory evaluation scores. By analyzing these correlations, we investigate the optimal color space for quantitative analysis of shiny skin.
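
A minimal sketch of the analysis pipeline, assuming RGB uint8 patches and simplified per-channel statistics; the exact statistics and the sensory-score format are assumptions:

```python
# Color-space analysis sketch: convert forehead (shiny) and cheek
# (non-shiny) patches into several color spaces, compute per-channel
# statistics of their difference, and correlate them with sensory scores.
import numpy as np
from skimage import color
from scipy.stats import pearsonr

CONVERSIONS = {
    "RGB": lambda img: img.astype(float),
    "HSV": color.rgb2hsv,
    "YIQ": color.rgb2yiq,
    "YCbCr": color.rgb2ycbcr,
}

def patch_statistics(forehead_rgb, cheek_rgb):
    """Per-channel mean and variance differences in each color space."""
    stats = {}
    for name, convert in CONVERSIONS.items():
        fh, ck = convert(forehead_rgb), convert(cheek_rgb)
        mean_diff = fh.reshape(-1, 3).mean(0) - ck.reshape(-1, 3).mean(0)
        var_diff = fh.reshape(-1, 3).var(0) - ck.reshape(-1, 3).var(0)
        stats[name] = (mean_diff, var_diff)
    return stats

def correlation_with_scores(feature_values, sensory_scores):
    """Pearson correlation between one statistic (per image) and the scores."""
    r, p = pearsonr(feature_values, sensory_scores)
    return r, p
```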

Collaboration


Dive into Masataka Seo's collaboration.

Top Co-Authors

Wei Xiong

University of Michigan


K. Muta

Ritsumeikan University
