Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Masaki Ishii is active.

Publications


Featured research published by Masaki Ishii.


International Symposium on Neural Networks | 2007

Orientation Selectivity for Representation of Facial Expression Changes

Hirokazu Madokoro; Kazuhito Sato; Masaki Ishii

This paper presents a method for representing facial expression changes using the orientation selectivity of Gabor wavelets on Adaptive Resonance Theory (ART) networks, which are unsupervised, self-organizing neural networks that embody a stability-plasticity tradeoff. The classification ability of ART is controlled by a parameter called the attentional vigilance parameter; however, the networks often produce inclusions or redundant categories. The proposed method produces suitable vigilance parameters according to classification granularity using orientation selectivity. Moreover, the method can represent the appearance and disappearance of facial expression changes, detecting dynamic, local, and topological feature changes from whole facial images.
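As a rough illustration of the two ingredients the abstract names, the sketch below (not the authors' implementation; the kernel parameters, feature pooling, match rule, and update rule are all assumptions) extracts orientation features with a small Gabor filter bank and feeds them to a toy ART-like categorizer whose vigilance parameter rho controls how finely inputs are split into categories.

    # Sketch only: Gabor orientation features + toy ART-style categorizer.
    import numpy as np

    def gabor_kernel(theta, ksize=15, sigma=3.0, lam=6.0):
        """Real part of a Gabor kernel at orientation theta (radians)."""
        half = ksize // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

    def orientation_features(img, n_orient=4, step=8):
        """Mean rectified Gabor response per orientation -> feature vector."""
        feats = []
        for k in range(n_orient):
            kern = gabor_kernel(np.pi * k / n_orient)
            kh, kw = kern.shape
            resp = [abs(np.sum(img[i:i + kh, j:j + kw] * kern))
                    for i in range(0, img.shape[0] - kh + 1, step)
                    for j in range(0, img.shape[1] - kw + 1, step)]
            feats.append(np.mean(resp))
        v = np.array(feats)
        return v / (v.sum() + 1e-9)

    class SimpleART:
        """Toy ART-like clustering: vigilance rho sets category granularity."""
        def __init__(self, rho=0.9):
            self.rho, self.prototypes = rho, []

        def present(self, x):
            for i, p in enumerate(self.prototypes):
                match = np.minimum(x, p).sum() / (x.sum() + 1e-9)
                if match >= self.rho:                # resonance: refine category
                    self.prototypes[i] = 0.5 * (p + x)
                    return i
            self.prototypes.append(x.copy())         # mismatch: new category
            return len(self.prototypes) - 1

    # Higher rho -> finer granularity (more categories), e.g.:
    # art = SimpleART(rho=0.92)
    # category = art.present(orientation_features(face_image))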


Neural Computation | 2001

Linear Constraints on Weight Representation for Generalized Learning of Multilayer Networks

Masaki Ishii; Itsuo Kumazawa

In this article, we present a technique to improve the generalization ability of multilayer neural networks. The proposed method introduces linear constraints on the weight representation based on the invariance properties of the training targets. We propose a learning method that introduces effective linear constraints into the error function as a penalty term. Furthermore, introducing such constraints reduces the VC dimension of the networks, and we derive bounds on the VC dimension of neural networks with such constraints. Finally, we demonstrate the effectiveness of the proposed method through experiments.
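A minimal numerical sketch of the penalty-term idea follows; the constraint matrix, data, and learning rate below are illustrative assumptions, not the paper's setup. A linear constraint A w = 0, standing in for a constraint derived from an invariance of the training targets, is added to the squared error as a penalty, and gradient descent drives the weights toward the constrained subspace.

    # Sketch: gradient descent on MSE plus a linear-constraint penalty.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))                  # training inputs
    y = X @ np.array([1.0, -1.0, 1.0, -1.0])       # target weights satisfy A w = 0
    w = rng.normal(size=4)

    # Illustrative constraint: we ask for A w = 0 via a penalty term.
    A = np.array([[1.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 1.0]])
    lam, lr = 0.1, 0.05

    for _ in range(1000):
        err = X @ w - y
        grad = X.T @ err / len(X) + lam * A.T @ (A @ w)  # MSE grad + penalty grad
        w -= lr * grad

    print(np.round(w, 3), np.round(A @ w, 4))      # w ~ [1,-1,1,-1], A w ~ 0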


Archive | 2012

Quantification of Emotions for Facial Expression: Generation of Emotional Feature Space Using Self-Mapping

Masaki Ishii; Toshio Shimodate; Yoichi Kageyama; Tsuyoshi Takahashi; Makoto Nishida

The shape (static diversity) and motion (dynamic diversity) of facial components, such as the eyebrows, eyes, nose, and mouth, manifest expression. From the viewpoint of static diversity, owing to the individual variation in facial configurations, it is presumed that the facial expression pattern produced when an expression is manifested includes subject-specific features. In addition, from the viewpoint of dynamic diversity, because the dynamic changes in facial expressions originate from subject-specific facial expression patterns, it is presumed that the displacement vector of facial components also has subject-specific features.
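As a toy illustration of the displacement-vector notion, the per-landmark motion between a neutral frame and an expression apex frame can be computed directly; the coordinates below are made-up placeholders, not measured data.

    # Sketch: displacement vectors of facial landmarks (placeholder values).
    import numpy as np

    # Made-up (x, y) landmark coordinates: brows, nose tip, mouth center
    neutral = np.array([[120.0, 80.0], [180.0, 80.0], [150.0, 130.0], [150.0, 170.0]])
    apex    = np.array([[118.0, 72.0], [182.0, 72.0], [150.0, 130.0], [150.0, 178.0]])

    disp = apex - neutral                      # displacement vector per landmark
    print(disp, np.linalg.norm(disp, axis=1))  # direction and magnitude of motion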


Archive | 2009

Generation of Facial Expression Map using Supervised and Unsupervised Learning

Masaki Ishii; Kazuhito Sato; Hirokazu Madokoro; Makoto Nishida

Recently, studies of human face recognition have been conducted vigorously (Fasel & Luettin, 2003; Yang et al., 2002; Pantic & Rothkrantz, 2000a; Zhao et al., 2000; Hasegawa et al., 1997; Akamatsu, 1997). Such studies are aimed at the implementation of an intelligent man-machine interface. In particular, studies of facial expression recognition for human-machine emotional communication are attracting attention (Fasel & Luettin, 2003; Pantic & Rothkrantz, 2000a; Tian et al., 2001; Pantic & Rothkrantz, 2000b; Lyons et al., 1999; Lyons et al., 1998; Zhang et al., 1998).

The shape (static diversity) and motion (dynamic diversity) of facial components such as the eyebrows, eyes, nose, and mouth manifest expressions. From the perspective of static diversity, because facial configurations differ among people, it is presumed that the facial expression pattern appearing on a face when an expression is manifested includes person-specific features. From the viewpoint of dynamic diversity, because the dynamic change of facial expression originates in a person-specific facial expression pattern, it is presumed that the displacement vector of facial components also has person-specific features.

These properties of the human face reveal the following tasks. The first is to generalize a facial expression recognition model. Numerous conventional approaches have attempted such generalization, using as feature values the distances of motion of feature points set on a face and the motion vectors of facial muscle movements in arbitrary regions. Typically, such methods assign this information to the Action Units (AUs) of the Facial Action Coding System (FACS) (Ekman & Friesen, 1978). However, AUs are described qualitatively, so no objective criteria govern the placement of feature points and regions; placement depends on a particular researcher's experience. Moreover, the features representing facial expressions are presumed to differ among subjects. Accordingly, a huge effort is necessary to link quantitative features with qualitative AUs for each subject and to derive universal features from them. It is also suspected that a generalized facial expression recognition model applicable to all subjects would disregard the person-specific features of facial expression that each subject originally bears. For these reasons, it is an important task to establish a method that extracts person-specific features using a common approach for every subject, and to build a facial expression recognition model that incorporates these features.


Journal of Multimedia | 2008

Extraction of Subject-Specific Facial Expression Categories and Generation of Facial Expression Feature Space using Self-Mapping

Masaki Ishii; Kazuhito Sato; Hirokazu Madokoro; Makoto Nishida

This paper proposes a method for generating a subject-specific Facial Expression Map (FEMap) that uses unsupervised Self-Organizing Maps (SOM) and supervised Counter Propagation Networks (CPN) together. The proposed method consists of two steps. In the first step, the topological changes of a face pattern during the process of expressing an emotion are learned hierarchically using an SOM with a narrow mapping space, and the number of subject-specific facial expression categories and the representative images of each category are extracted. Psychological significance based on the neutral expression and six basic emotions (anger, sadness, disgust, happiness, surprise, and fear) is assigned to each extracted category. In the second step, the categories and representative images described above are learned using a CPN with a large mapping space, and a category map that expresses the topological characteristics of facial expression is generated. This paper defines this category map as an FEMap. Experimental results for six subjects show that the proposed method can generate a subject-specific FEMap based on the topological characteristics of the facial expressions appearing on face images.
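For the first step, a minimal 1-D SOM sketch is shown below; the input vectors stand in for preprocessed face patterns, and the map size, learning rates, and decay schedules are illustrative assumptions rather than the paper's settings.

    # Sketch: 1-D SOM for extracting face-pattern categories.
    import numpy as np

    def train_som(data, n_units=6, epochs=200, lr0=0.5, sigma0=2.0, seed=0):
        """Train a 1-D SOM; returns one weight vector per map unit."""
        rng = np.random.default_rng(seed)
        w = rng.normal(size=(n_units, data.shape[1]))
        for t in range(epochs):
            frac = t / epochs
            lr = lr0 * (1.0 - frac)                  # decaying learning rate
            sigma = sigma0 * (1.0 - frac) + 0.5      # shrinking neighborhood
            for x in rng.permutation(data):
                bmu = int(np.argmin(np.linalg.norm(w - x, axis=1)))
                h = np.exp(-((np.arange(n_units) - bmu) ** 2) / (2 * sigma**2))
                w += lr * h[:, None] * (x - w)       # pull BMU and its neighbors
        return w

    # Units that attract many face-pattern vectors act as the subject-specific
    # expression categories; their weight vectors serve as representative images.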


Systems, Man and Cybernetics | 2007

Generation of facial expression map based on topological characteristics of face images

Masaki Ishii; Kazuhito Sato; Hirokazu Madokoro; Makoto Nishida

This paper proposes a method for generating a facial expression map (FEMap) using self-organizing maps (SOM) for unsupervised learning and counter propagation networks (CPN) for supervised learning. First, the topological changes of a face pattern during the process of expressing an emotion are learned hierarchically using an SOM with a narrow mapping space, yielding the number of subject-specific facial expression categories and the representative images of each category. Next, these images are learned using a CPN with a large mapping space, generating a category map that expresses the topological characteristics of facial expression. Finally, psychological significance based on a neutral expression and six basic emotions is assigned to each category. This paper defines this category map as an FEMap. Experimental results for six subjects show that the proposed method can generate a subject-specific FEMap based on the topological characteristics of the facial expressions appearing on face images.


Computational Intelligence for Modelling, Control and Automation | 2005

Training Data Modeling Using Counter Propagation Networks for Improved Generalization Abilities

Hirokazu Madokoro; Kazuhito Sato; Masaki Ishii

This paper presents a new method for improving the generalization ability of back propagation networks (BPNs). The method is based on the topological data mapping used in counter propagation networks (CPNs). A CPN saves input data into a category map while retaining the topological structure of the data. We use the weights and labels of the category map as new training data for the BPN. Our method provides the following benefits: 1) the amount of training data can be controlled by changing the category map size; 2) interpolated training data can be produced within the topological space; and 3) overlapping training data can be avoided through Winner-Take-All competition. Experimental results show that the expanded training data improved generalization ability.
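A minimal sketch of this training-data-modeling idea follows; the map size, learning rate, and majority-vote labeling are assumptions. A small category map is trained with winner-take-all competition, each unit is labeled by vote, and the resulting (weight, label) pairs serve as compact training data for a BPN.

    # Sketch: winner-take-all category map reused as compact training data.
    import numpy as np

    def category_map(X, labels, n_units=9, epochs=100, lr=0.3, seed=0):
        """Returns unit weight vectors and their majority-vote labels."""
        rng = np.random.default_rng(seed)
        w = X[rng.choice(len(X), n_units, replace=False)].astype(float)
        votes = np.zeros((n_units, int(labels.max()) + 1))
        for _ in range(epochs):
            for x, y in zip(X, labels):
                bmu = int(np.argmin(np.linalg.norm(w - x, axis=1)))  # WTA winner
                w[bmu] += lr * (x - w[bmu])
                votes[bmu, y] += 1
        return w, votes.argmax(axis=1)

    # The (weights, labels) pairs become an interpolated, non-overlapping
    # training set for the BPN; n_units controls how many samples it sees.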


Systems and Computers in Japan | 2003

Acquisition of world images and self-localization estimation using viewing image sequences

Hirokazu Madokoro; Kazuhito Sato; Masaki Ishii

We propose a method for robotic self-localization estimation that can process position information without identifying special landmarks. This method, which combines viewpoint shifts with visual information about the environment, makes it possible for a robot to move both autonomously and purposefully. In our procedures, the robot surveys the landscape of an environment from multiple directions, obtaining self-localization from a viewing image sequence. The robot is made aware of changes in the landscape via self-organizing maps (SOM), which generate “concept patterns”. By making the SOM hierarchical, these concept patterns can be consolidated. This allows the robot to move both autonomously and purposefully in the environment toward a position by using previously collected information. By performing a travel experiment with the robot in an indoor environment, in which characteristic concept patterns recording topologies had been previously generated at various positions during a learning period, we confirmed that a correct self-localization estimate can be generated from landscape changes detected via viewpoint shifts.
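Assuming a SOM has already been trained on landscape feature vectors collected at known positions, self-localization in this spirit reduces to a best-matching-unit lookup; the sketch below is a simplified single-level stand-in for the hierarchical SOM described here, and all names are illustrative.

    # Sketch: self-localization as a best-matching-unit lookup.
    import numpy as np

    def localize(view_sequence, som_weights, unit_positions):
        """Return the stored position of the best-matching SOM unit."""
        x = np.mean(view_sequence, axis=0)        # consolidate the image sequence
        bmu = int(np.argmin(np.linalg.norm(som_weights - x, axis=1)))
        return unit_positions[bmu]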


Systems and Computers in Japan | 2003

Introduction of linear constraints on the weight representation of multilayer networks for generalization and application to character recognition

Masaki Ishii; Itsuo Kumazawa

In the application of layered neural networks to practical problems, high generalization power is required. This paper discusses a method of improving the generalization power of neural networks. The knowledge of the object to be learned is assumed to include the fact that its output function remains invariant over a certain range of input pattern variation. An attempt is made to improve generalization by reflecting this invariance in the weights of the neural network. It is shown that, when the variation of the input pattern can be represented by a linear transformation, a sufficient condition for the network to have such invariance is that a linear dependency constraint be introduced into the weight representation. A learning process is proposed in which this constraint on the weight representation is introduced into the evaluation function as an additional term. The proposed method can be considered a generalization that includes deletion learning methods, such as weight decay and structural learning, as special cases. The relation between generalization power and the VC dimension has been discussed in the literature; the improvement offered by the proposed method can be evaluated in terms of the reduction of the VC dimension brought about by introducing a linear constraint into the weight representation. Lastly, results are presented for an experiment in which the proposed method is applied to a character recognition problem.
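In symbols, the penalty form described here can be written as follows, where E_0 (data error), C (constraint matrix), and lambda are generic placeholders rather than the paper's notation; setting C to the identity recovers weight decay as a special case:

    E(w) = E_0(w) + \lambda \lVert C w \rVert^2,
    \qquad C = I \;\Rightarrow\; E(w) = E_0(w) + \lambda \lVert w \rVert^2
    \quad \text{(weight decay)}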


International Symposium on Neural Networks | 2001

Invariant learning of multilayer networks for generalization

Masaki Ishii; Itsuo Kumazawa

The purpose of this paper is to improve the generalization ability of multilayer neural networks. Our approach is to construct a neural network whose outputs are invariant with respect to certain transformations of the input patterns. We present an error function that incorporates invariances into a neural network through training. A special case of the proposed method can be considered the tangent prop algorithm. Finally, we show experimental results that demonstrate the effectiveness of the proposed method.
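A minimal sketch in the tangent-prop spirit the abstract mentions is given below; the toy network, finite-difference derivative, and tangent vector are all assumptions. The squared directional derivative of the output along a transformation's tangent direction is penalized, pushing the outputs toward invariance.

    # Sketch: invariance penalty on the directional derivative of the output.
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(3, 4)) * 0.1

    def f(x, W):
        """Toy one-layer network output."""
        return np.tanh(W @ x).sum()

    def invariance_penalty(x, t, W, eps=1e-4):
        """Squared directional derivative of f along tangent direction t."""
        d = (f(x + eps * t, W) - f(x - eps * t, W)) / (2 * eps)
        return d ** 2

    x = rng.normal(size=4)
    t = rng.normal(size=4)   # tangent of an input transformation at x
    # Adding lambda * invariance_penalty(x, t, W) to the training error
    # drives the outputs toward invariance along t.
    print(invariance_penalty(x, t, W))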

Collaboration


Dive into Masaki Ishii's collaborations.

Top Co-Authors

Kazuhito Sato
Akita Prefectural University

Itsuo Kumazawa
Tokyo Institute of Technology

Akira Takaku
Tokyo Institute of Technology

Jiro Shimizu
Tokyo Institute of Technology

Norimasa Okui
Tokyo Institute of Technology