
Publications


Featured research published by Manabu Nii.


Fuzzy Sets and Systems | 2001

Fuzzy regression using asymmetric fuzzy coefficients and fuzzified neural networks

Hisao Ishibuchi; Manabu Nii

In this paper, first we explain several versions of fuzzy regression methods based on linear fuzzy models with symmetric triangular fuzzy coefficients. Next we point out some limitations of such fuzzy regression methods. Then we extend the symmetric triangular fuzzy coefficients to asymmetric triangular and trapezoidal fuzzy numbers. We show that the limitations of the fuzzy regression methods with the symmetric triangular fuzzy coefficients are remedied by such extension. Several formulations of linear programming problems are proposed for determining asymmetric fuzzy coefficients from numerical data. Finally, we show how fuzzified neural networks can be utilized as nonlinear fuzzy models in fuzzy regression. In the fuzzified neural networks, asymmetric fuzzy numbers are used as connection weights. The fuzzy connection weights of the fuzzified neural networks correspond to the fuzzy coefficients of the linear fuzzy models. Nonlinear fuzzy regression based on the fuzzified neural networks is illustrated by computer simulations where Type I and Type II membership functions are determined from numerical data.
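The symmetric baseline that the paper extends can be sketched as a linear program in the style of Tanaka's possibilistic regression: each coefficient is a symmetric triangular fuzzy number (centre, spread), and the total spread is minimised subject to every observation falling inside the model's h-level output interval. The function name and parameterisation below are ours; the paper's asymmetric triangular and trapezoidal formulations are not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

def fuzzy_linear_regression(X, y, h=0.0):
    """Possibilistic linear regression with symmetric triangular fuzzy
    coefficients A_i = (a_i, c_i) (centre, spread).  Minimise the total
    spread subject to every observation lying inside the h-level output
    interval -- a sketch of the symmetric baseline, not the paper's
    asymmetric formulations."""
    n, _ = X.shape
    Xa = np.hstack([np.ones((n, 1)), X])      # absorb the intercept A_0
    absX = np.abs(Xa)
    p1 = Xa.shape[1]
    # decision variables: [a_0..a_p, c_0..c_p], spreads c_i >= 0
    cost = np.concatenate([np.zeros(p1), absX.sum(axis=0)])
    # y_j <= a.x_j + (1-h) c.|x_j|  and  y_j >= a.x_j - (1-h) c.|x_j|
    A_ub = np.vstack([
        np.hstack([-Xa, -(1 - h) * absX]),
        np.hstack([Xa, -(1 - h) * absX]),
    ])
    b_ub = np.concatenate([-y, y])
    bounds = [(None, None)] * p1 + [(0, None)] * p1
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    a, c = res.x[:p1], res.x[p1:]
    return a, c

X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([2.1, 3.9, 6.2, 7.8])
a, c = fuzzy_linear_regression(X, y)
```

The limitation the paper points out shows up here directly: a symmetric spread must cover the worst residual on both sides, which is what the asymmetric extension relaxes.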


Fuzzy Sets and Systems | 2001

Numerical analysis of the learning of fuzzified neural networks from fuzzy if-then rules

Hisao Ishibuchi; Manabu Nii

The main aim of this paper is to clearly show how fuzzified neural networks are trained by back-propagation-type learning algorithms for approximately realizing fuzzy if–then rules. Our fuzzified neural network is a three-layer feedforward neural network where connection weights are fuzzy numbers. A set of fuzzy if–then rules is used as training data for the learning of our fuzzified neural network. That is, inputs and targets are linguistic values such as “small” and “large”. In this paper, we first demonstrate that the fuzziness in training data propagates backward in our fuzzified neural network. Next we examine the ability of our fuzzified neural network to approximately realize fuzzy if–then rules. In computer simulations, we compare four types of connection weights (i.e., real numbers, symmetric triangular fuzzy numbers, asymmetric triangular fuzzy numbers, and asymmetric trapezoidal fuzzy numbers) in terms of the fitting ability to training data and the computation time. We also examine a partially fuzzified neural network. In our partially fuzzified neural network, connection weights and biases to output units are fuzzy numbers while those to hidden units are real numbers. Simulation results show that such a partially fuzzified neural network is a good hybrid architecture of fully fuzzified neural networks and neural networks with non-fuzzy connection weights.
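The arithmetic behind such networks can be illustrated level-set-wise: at a fixed alpha-cut every fuzzy weight reduces to an interval, an interval product is bounded by the four endpoint products, and the monotone sigmoid maps interval endpoints to interval endpoints. The sketch below shows one layer of that forward pass; the names and shapes are our assumptions, not the paper's notation, and no learning algorithm is included.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def interval_layer(xL, xU, wL, wU, bL, bU):
    """One layer of an interval-arithmetic forward pass at a fixed
    alpha-cut: inputs, weights, and biases are intervals [lo, up].
    An interval product is bounded by the min/max of the four endpoint
    products, and the monotone sigmoid preserves interval endpoints."""
    p1 = xL[:, None] * wL
    p2 = xL[:, None] * wU
    p3 = xU[:, None] * wL
    p4 = xU[:, None] * wU
    lo = np.minimum.reduce([p1, p2, p3, p4]).sum(axis=0) + bL
    hi = np.maximum.reduce([p1, p2, p3, p4]).sum(axis=0) + bU
    return sigmoid(lo), sigmoid(hi)
```

When inputs and weights collapse to crisp numbers (xL == xU, wL == wU), the pass reduces to an ordinary feedforward computation, which is why real-number weights appear as a special case in the paper's comparisons.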


Archive | 2001

Genetic-Algorithm-Based Instance and Feature Selection

Hisao Ishibuchi; Tomoharu Nakashima; Manabu Nii

This chapter discusses a genetic-algorithm-based approach for selecting a small number of instances from a given data set in a pattern classification problem. Our genetic algorithm also selects a small number of features. The selected instances and features are used as a reference set in a nearest neighbor classifier. Our goal is to improve the classification ability of our nearest neighbor classifier by searching for an appropriate reference set. We first describe the implementation of our genetic algorithm for the instance and feature selection. Next we discuss the definition of a fitness function in our genetic algorithm. Then we examine the classification ability of nearest neighbor classifiers designed by our approach through computer simulations on some data sets. We also examine the effect of the instance and feature selection on the learning of neural networks. It is shown that the instance and feature selection prevents the overfitting of neural networks.
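A toy version of the scheme can be written as a GA over a bit string that concatenates instance bits and feature bits, with a fitness that rewards nearest-neighbour accuracy and lightly penalises the size of the reference set. The population size, penalty weights, and genetic operators below are illustrative choices, not the chapter's settings.

```python
import numpy as np

def nn_accuracy(X, y, inst_mask, feat_mask):
    """Classify every pattern by its nearest neighbour among the selected
    instances, using only the selected features (selected instances match
    themselves at distance zero, which is good enough for this sketch)."""
    if inst_mask.sum() == 0 or feat_mask.sum() == 0:
        return 0.0
    ref_X, ref_y = X[inst_mask][:, feat_mask], y[inst_mask]
    correct = 0
    for i in range(len(X)):
        d = np.sum((ref_X - X[i, feat_mask]) ** 2, axis=1)
        correct += ref_y[np.argmin(d)] == y[i]
    return correct / len(X)

def ga_select(X, y, pop_size=20, gens=30, seed=0):
    """Toy GA over a bit string [instance bits | feature bits]: fitness
    rewards 1-NN accuracy and lightly penalises reference-set size."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    L = n + p
    pop = rng.integers(0, 2, size=(pop_size, L)).astype(bool)

    def fitness(ch):
        return 10 * nn_accuracy(X, y, ch[:n], ch[n:]) - ch[:n].mean() - ch[n:].mean()

    for _ in range(gens):
        fit = np.array([fitness(c) for c in pop])
        new = [pop[np.argmax(fit)].copy()]                # elitism
        while len(new) < pop_size:
            parents = []
            for _ in range(2):                            # binary tournament
                i, j = rng.integers(0, pop_size, 2)
                parents.append(pop[i] if fit[i] >= fit[j] else pop[j])
            mask = rng.random(L) < 0.5                    # uniform crossover
            child = np.where(mask, parents[0], parents[1])
            child ^= rng.random(L) < 0.02                 # bit-flip mutation
            new.append(child)
        pop = np.array(new)
    fit = np.array([fitness(c) for c in pop])
    best = pop[np.argmax(fit)]
    return best[:n], best[n:]

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (15, 2)), rng.normal(3.0, 0.3, (15, 2))])
X = np.hstack([X, rng.normal(0.0, 1.0, (30, 1))])        # one noise feature
y = np.array([0] * 15 + [1] * 15)
inst_mask, feat_mask = ga_select(X, y)
```

On a well-separated synthetic problem like this one, the GA typically keeps a small reference set while preserving accuracy, which is the effect the chapter studies.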


International Symposium on Neural Networks | 1997

Linguistic rule extraction from neural networks and genetic-algorithm-based rule selection

Hisao Ishibuchi; Manabu Nii; Tadahiko Murata

This paper proposes a hybrid approach to the design of a compact fuzzy rule-based classification system with a small number of linguistic rules. The proposed approach consists of two procedures: rule extraction from a trained neural network and rule selection by a genetic algorithm. We first describe how linguistic rules can be extracted from a multilayer feedforward neural network that has already been trained for a classification problem with many continuous attributes. In our rule extraction procedure, a linguistic input vector corresponding to the antecedent part of a linguistic rule is presented to the trained neural network, and the fuzzy output vector from the trained neural network is examined for determining the consequent part and the grade of certainty of that linguistic rule. Next we explain how a genetic algorithm can be utilized for selecting a small number of significant linguistic rules from a large number of extracted rules. Our rule selection problem has two objectives: to minimize the number of selected linguistic rules and to maximize the number of correctly classified patterns by the selected linguistic rules. A multi-objective genetic algorithm is employed for finding a set of non-dominated solutions with respect to these two objectives. Finally we illustrate our hybrid approach by computer simulations on real-world test problems.
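The two-objective structure of the rule selection problem can be pictured with a plain non-dominance filter over pairs of (number of selected rules, number of correctly classified patterns). This is only the dominance check at the heart of a multi-objective GA, not the genetic search itself.

```python
def non_dominated(solutions):
    """Filter (n_rules, n_correct) pairs: minimise the first objective,
    maximise the second.  A solution is dropped when another one selects
    no more rules and classifies at least as many patterns correctly,
    with a strict improvement in at least one objective."""
    front = []
    for s in solutions:
        dominated = any(
            t[0] <= s[0] and t[1] >= s[1] and (t[0] < s[0] or t[1] > s[1])
            for t in solutions
        )
        if not dominated:
            front.append(s)
    return front

candidates = [(3, 90), (5, 95), (5, 90), (2, 80)]
front = non_dominated(candidates)
```

Here (5, 90) is dropped because (3, 90) classifies just as many patterns with fewer rules, while the remaining three candidates trade off against each other and all survive.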


Fuzzy Sets and Systems | 2000

Neural networks for soft decision making

Hisao Ishibuchi; Manabu Nii

This paper discusses various techniques for soft decision making by neural networks. Decision making problems are described as choosing an action from possible alternatives using available information. In the context of soft decision making, a single action is not always chosen. When it is difficult to choose a single action based on available information, the decision is withheld or a set of promising actions is presented to human users. The ability to handle uncertain information is also required in soft decision making. In this paper, we handle decision making as a classification problem where an input pattern is classified as one of given classes. Class labels in the classification problem correspond to alternative actions in decision making. In this paper, neural networks are used as classification systems, which eventually could be implemented as a part of decision making systems. First we focus on soft decision making by trained neural networks. We assume that the learning of a neural network has already been completed. When a new pattern cannot be classified as a single class with high certainty by the trained neural network, the classification of such a new pattern is rejected. After briefly describing rejection methods based on crisp outputs from the trained neural network, we propose an interval-arithmetic-based rejection method with interval input vectors, and extend it to the case of fuzzy input vectors. Next we describe the learning of neural networks for possibility analysis. The aim of possibility analysis is to present a set of possible classes of a new pattern to human users. Then we describe the learning of neural networks from training patterns with uncertainty. Such training patterns are denoted by interval vectors and fuzzy vectors. Finally we examine the performance of various soft decision making methods described in this paper by computer simulations on commonly used data sets in the literature.
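The interval-based rejection idea can be sketched as a one-line decision rule over interval class outputs: commit to a class only when its lower bound exceeds every other class's upper bound, and otherwise withhold the decision. The function name and the None-as-rejection convention below are ours.

```python
import numpy as np

def soft_decide(lo, hi):
    """Rejection rule over interval class outputs [lo_k, hi_k]: commit to
    class k only when its lower bound exceeds every other class's upper
    bound, i.e. when no interval overlap leaves the winner ambiguous.
    Returning None stands for withholding the decision."""
    k = int(np.argmax(lo))
    if lo[k] > np.delete(hi, k).max():
        return k
    return None
```

A fuzzy input vector can be handled the same way level set by level set, since each alpha-cut of the fuzzy input yields interval outputs of this form.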


Joint IFSA World Congress and NAFIPS International Conference | 2001

Learning of neural networks with GA-based instance selection

Hisao Ishibuchi; Tomoharu Nakashima; Manabu Nii

We examine the effect of instance and feature selection on the generalization ability of trained neural networks for pattern classification problems. Before the learning of neural networks, a genetic-algorithm-based instance and feature selection method is applied for reducing the size of training data. Nearest neighbor classification is used for evaluating the classification ability of subsets of training data in instance and feature selection. Neural networks are trained by the selected subset (i.e., reduced training data). In this paper, we first explain our GA-based instance and feature selection method. Then we examine the effect of instance and feature selection on the generalization ability of trained neural networks through computer simulations on various artificial and real-world pattern classification problems.


International Symposium on Neural Networks | 1996

Fuzzy regression analysis by neural networks with non-symmetric fuzzy number weights

Hisao Ishibuchi; Manabu Nii

In this paper, we first explain the fuzzy regression methods based on fuzzy linear models with symmetric triangular fuzzy number coefficients, and point out some drawbacks in such fuzzy regression methods. Next, we extend the fuzzy linear models to the case of non-symmetric fuzzy number coefficients. We illustrate that several drawbacks can be remedied by this extension. We then propose three methods of fuzzy nonlinear regression analysis using fuzzified neural networks with non-symmetric fuzzy number weights. One of the proposed nonlinear fuzzy regression methods is applied to the determination of type 2 membership functions.


IEEE International Conference on Fuzzy Systems | 1996

Learning of fuzzy connection weights in fuzzified neural networks

Hisao Ishibuchi; Manabu Nii

We examine how fuzzy connection weights are adjusted in fuzzified neural networks by various computer simulations. Our fuzzified neural networks are three-layer feedforward neural networks where connection weights are given as fuzzy numbers. The fuzzified neural networks can handle fuzzy numbers as inputs and targets. First, we examine how the fuzziness in training data propagates to the fuzziness of the connection weights by the learning of the fuzzified neural networks. Next, we examine the ability of the fuzzified neural networks to approximately realize fuzzy if-then rules. In computer simulations, we compare three types of connection weights: real numbers, symmetric triangular fuzzy numbers and non-symmetric trapezoidal fuzzy numbers. By computer simulations, it is demonstrated that the non-fuzzy neural networks with the real number connection weights do not work well for some test problems where the fuzziness of targets is much larger than the fuzziness of inputs. On the contrary, when the fuzziness of targets is much smaller than the fuzziness of inputs, the fuzzy connection weights are not necessary.


Archive | 2000

Fuzzy If-Then Rules for Pattern Classification

Hisao Ishibuchi; Tomoharu Nakashima; Manabu Nii

This chapter illustrates how fuzzy if-then rules can be used for pattern classification problems. First we describe a heuristic method for automatically generating fuzzy if-then rules for pattern classification problems from training patterns. The heuristic method uses a simple fuzzy grid for partitioning a pattern space into fuzzy subspaces. A fuzzy if-then rule is generated in each fuzzy subspace. Using the heuristic rule generation method, we examine some basic aspects of fuzzy rule-based classification systems such as the shape of membership functions, the definition of the compatibility grade, and the choice of a fuzzy reasoning method. Next we describe a fuzzy rule selection method for designing compact fuzzy rule-based systems with high classification ability. A small number of fuzzy if-then rules are selected from a large number of candidate rules by a genetic algorithm. Finally we describe two genetics-based machine learning algorithms for designing fuzzy rule-based systems for high-dimensional pattern classification problems. In those methods, fuzzy rule-based systems are evolved by genetic operations such as selection, crossover, and mutation. Simulation results on some well-known data sets are shown for illustrating our approaches to the design of fuzzy rule-based systems.
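The heuristic grid-based generation and single-winner reasoning can be sketched compactly: partition each axis of the unit pattern space into uniform triangular fuzzy sets, turn each grid cell with nonzero compatibility into a rule whose consequent is the class with the largest summed compatibility, and classify by the rule maximising compatibility times certainty grade. The certainty-grade formula below follows the common construction but is simplified, and all names are ours.

```python
import numpy as np
from itertools import product

def tri_membership(x, k, K):
    """k-th of K uniformly spaced triangular fuzzy sets on [0, 1]."""
    width = 1.0 / (K - 1)
    return np.maximum(0.0, 1.0 - np.abs(x - k * width) / width)

def generate_rules(X, y, K=3):
    """Heuristic fuzzy-grid rule generation: every grid cell with nonzero
    compatibility to the training data becomes one rule; the consequent is
    the class with the largest summed (product) compatibility, weighted by
    a certainty grade."""
    rules, classes = [], np.unique(y)
    for cell in product(range(K), repeat=X.shape[1]):
        compat = np.prod(
            [tri_membership(X[:, d], cell[d], K) for d in range(X.shape[1])],
            axis=0)
        sums = np.array([compat[y == c].sum() for c in classes])
        if sums.sum() > 0:
            others = (sums.sum() - sums.max()) / (len(classes) - 1)
            cf = max((sums.max() - others) / sums.sum(), 0.0)
            rules.append((cell, classes[np.argmax(sums)], cf))
    return rules

def classify(x, rules, K=3):
    """Single-winner reasoning: the rule maximising compatibility times
    certainty grade decides the class."""
    best, label = -1.0, None
    for cell, c, cf in rules:
        mu = cf * np.prod([tri_membership(x[d], cell[d], K) for d in range(len(x))])
        if mu > best:
            best, label = mu, c
    return label

X = np.array([[0.1, 0.2], [0.2, 0.1], [0.25, 0.2],
              [0.8, 0.9], [0.9, 0.8], [0.75, 0.85]])
y = np.array([0, 0, 0, 1, 1, 1])
rules = generate_rules(X, y)
```

The rule selection and genetics-based learning methods the chapter goes on to describe start from candidate rule sets generated in essentially this way.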


Simulated Evolution and Learning | 1998

Learning from Linguistic Rules and Rule Extraction for Function Approximation by Neural Networks

Kimiko Tanaka; Manabu Nii; Hisao Ishibuchi

We have already shown that the relation between neural networks and linguistic knowledge is bidirectional for pattern classification problems. That is, neural networks are trained by given linguistic rules, and linguistic rules are extracted from trained neural networks. In this paper, we illustrate the bidirectional relation for function approximation problems. First we show how linguistic rules and numerical data can be simultaneously utilized in the learning of neural networks. In our learning scheme, antecedent and consequent linguistic values are specified by membership functions of fuzzy numbers. Thus each linguistic rule is handled as a fuzzy input-output pair. Next we show how linguistic rules can be extracted from trained neural networks. In our rule extraction method, linguistic values in the antecedent part of each linguistic rule are presented to a trained neural network for determining its consequent part. The corresponding fuzzy output from the trained neural network is calculated by fuzzy arithmetic. The consequent part of the linguistic rule is determined by comparing the fuzzy output with linguistic values. Finally we suggest some extensions of our rule extraction method.
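One plausible way to pick the consequent is to compare the network's fuzzy output against the membership functions of the candidate linguistic values and keep the closest one. The discretised Jaccard-style similarity used below is our assumption, not the paper's criterion; the triangular terms and their labels are likewise illustrative.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership with support [a, c] and peak at b."""
    return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

def best_linguistic_match(mu_out, grid, terms):
    """Pick the linguistic value whose membership function overlaps the
    network's fuzzy output most, measured by a discretised Jaccard index
    (min-overlap over max-envelope).  The similarity measure is our
    assumption, not the paper's comparison criterion."""
    best, name = -1.0, None
    for label, (a, b, c) in terms.items():
        mu_t = tri(grid, a, b, c)
        sim = np.minimum(mu_out, mu_t).sum() / np.maximum(mu_out, mu_t).sum()
        if sim > best:
            best, name = sim, label
    return name

grid = np.linspace(0.0, 1.0, 101)
# shoulders are encoded by letting the triangle's feet fall outside [0, 1]
terms = {"small": (-0.5, 0.0, 0.5),
         "medium": (0.0, 0.5, 1.0),
         "large": (0.5, 1.0, 1.5)}
```

Given a fuzzy output computed by fuzzy arithmetic on the grid, this returns the label of the most similar linguistic value, which plays the role of the extracted rule's consequent.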

Collaboration


Dive into Manabu Nii's collaborations.

Top Co-Authors

Hisao Ishibuchi (Osaka Prefecture University)
Tomoharu Nakashima (Osaka Prefecture University)
Kimiko Tanaka (Osaka Prefecture University)
Masahiro Takatani (Osaka Prefecture University)