Publication


Featured research published by Keinosuke Fukunaga.


IEEE Transactions on Information Theory | 1975

The estimation of the gradient of a density function, with applications in pattern recognition

Keinosuke Fukunaga; Larry D. Hostetler

Nonparametric density gradient estimation using a generalized kernel approach is investigated. Conditions on the kernel functions are derived to guarantee asymptotic unbiasedness, consistency, and uniform consistency of the estimates. The results are generalized to obtain a simple mean-shift estimate that can be extended in a k-nearest-neighbor approach. Applications of gradient estimation to pattern recognition are presented using clustering and intrinsic dimensionality problems, with the ultimate goal of providing further understanding of these problems in terms of density gradients.
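
The mean-shift estimate introduced here has a very compact core. Below is a minimal sketch in Python with NumPy, assuming a flat (uniform) kernel and a fixed bandwidth; the function name and the default bandwidth, tolerance, and iteration cap are illustrative choices, not the paper's generalized-kernel formulation.

```python
import numpy as np

def mean_shift_mode(point, data, bandwidth=1.0, tol=1e-5, max_iter=100):
    """Climb from `point` toward a nearby density mode of `data` (N x d)."""
    x = np.asarray(point, dtype=float)
    for _ in range(max_iter):
        # flat (uniform) kernel: keep the samples within `bandwidth` of x
        dists = np.linalg.norm(data - x, axis=1)
        window = data[dists <= bandwidth]
        if len(window) == 0:
            break  # empty window: nothing to average, stop where we are
        new_x = window.mean(axis=0)  # the mean-shift update
        if np.linalg.norm(new_x - x) < tol:
            return new_x
        x = new_x
    return x
```

Each update replaces the current point with the sample mean inside the window, which is the density-gradient ascent step the estimate is built on.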


IEEE Transactions on Computers | 1975

A Branch and Bound Algorithm for Computing k-Nearest Neighbors

Keinosuke Fukunaga; Patrenahalli M. Narendra

Computation of the k-nearest neighbors generally requires a large number of expensive distance computations. The method of branch and bound is implemented in the present algorithm to facilitate rapid calculation of the k-nearest neighbors, by eliminating the necessity of calculating many distances. Experimental results demonstrate the efficiency of the algorithm. Typically, an average of only 61 distance computations were made to find the nearest neighbor of a test sample among 1000 design samples.
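
A single-level rendering of the pruning rule may help make the idea concrete. The sketch below assumes the data have already been grouped into clusters, each summarized by a centroid and a bounding radius; the paper's actual algorithm applies the same triangle-inequality bound within a hierarchical decomposition of the design samples. All names and the cluster representation are illustrative.

```python
import heapq
import numpy as np

def bb_knn(query, clusters, k=1):
    """Find the k nearest points to `query` with branch-and-bound pruning.

    `clusters` is a list of (centroid, radius, points) triples, where
    `radius` bounds the centroid-to-point distance inside that cluster.
    """
    best = []  # max-heap via negated distances: the k best found so far
    # visit clusters in order of centroid distance, promising ones first
    for centroid, radius, points in sorted(
            clusters, key=lambda c: np.linalg.norm(query - c[0])):
        # triangle inequality: no point in this cluster can be closer
        # than ||query - centroid|| - radius
        if len(best) == k and np.linalg.norm(query - centroid) - radius > -best[0][0]:
            continue  # prune the whole cluster, no per-point distances needed
        for p in points:
            d = np.linalg.norm(query - p)
            if len(best) < k:
                heapq.heappush(best, (-d, tuple(p)))
            elif d < -best[0][0]:
                heapq.heapreplace(best, (-d, tuple(p)))
    return sorted((-nd, p) for nd, p in best)  # (distance, point) pairs
```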


IEEE Transactions on Computers | 1970

Application of the Karhunen-Loève Expansion to Feature Selection and Ordering

Keinosuke Fukunaga; Warren L. G. Koontz

The Karhunen-Loève expansion has been used previously to extract important features for representing samples taken from a given distribution. A method is developed herein to use the Karhunen-Loève expansion to extract features relevant to classification of a sample taken from one of two pattern classes. Numerical examples are presented to illustrate the technique.
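
For the representation (rather than classification) variant of the expansion, the computation reduces to an eigendecomposition of the sample covariance. A minimal sketch follows, assuming the samples are the rows of X; the classification-oriented feature ordering this paper develops is not reproduced here, and the function name is illustrative.

```python
import numpy as np

def kl_features(X, m):
    """Project rows of X onto the m leading eigenvectors of the covariance."""
    Xc = X - X.mean(axis=0)                      # center the samples
    eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(eigvals)[::-1]            # largest eigenvalues first
    return Xc @ eigvecs[:, order[:m]]            # N x m feature matrix
```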


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1989

Effects of sample size in classifier design

Keinosuke Fukunaga; Raymond R. Hayes

The effect of finite sample size on parameter estimates and their subsequent use in a family of functions are discussed. General and parameter-specific expressions for the expected bias and variance of the functions are derived. These expressions are then applied to the Bhattacharyya distance and the analysis of the linear and quadratic classifiers, providing insight into the relationship between the number of features and the number of training samples. Because of the functional form of the expressions, an empirical approach is presented to enable asymptotic performance to be accurately estimated using a very small number of samples. Results were experimentally verified using artificial data in controlled cases and using real, high-dimensional data.
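
One way to read the empirical approach: measure the classifier's error at several small design-set sizes, fit the error as a function of 1/N, and take the intercept as the asymptotic estimate. The sketch below assumes a simple linear dependence on 1/N, matching the leading-order form of such expansions; the exact functional form in the paper depends on the classifier, and the function name is illustrative.

```python
import numpy as np

def extrapolate_asymptotic_error(sample_sizes, error_rates):
    """Fit error(N) ~ a + b/N and return the intercept a, i.e. the
    estimated infinite-sample error."""
    inv_n = 1.0 / np.asarray(sample_sizes, dtype=float)
    b, a = np.polyfit(inv_n, np.asarray(error_rates, dtype=float), 1)
    return a

# e.g. errors measured at N = 50, 100, 200 design samples (made-up numbers):
# extrapolate_asymptotic_error([50, 100, 200], [0.21, 0.17, 0.15])
```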


IEEE Transactions on Computers | 1971

An Algorithm for Finding Intrinsic Dimensionality of Data

Keinosuke Fukunaga; David R. Olsen

An algorithm for the analysis of multivariate data is presented along with some experimental results. The basic idea of the method is to examine the data in many small subregions, and from this determine the number of governing parameters, or intrinsic dimensionality. This intrinsic dimensionality is usually much lower than the dimensionality that is given by the standard Karhunen-Loève technique. An analysis that demonstrates the feasibility of this approach is presented.
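
A local-PCA sketch conveys the subregion idea: in small neighborhoods the data should spread along only as many directions as there are governing parameters. The neighborhood size and variance threshold below are illustrative knobs, and the eigenvalue-counting rule is a simplification of the paper's analysis.

```python
import numpy as np

def local_intrinsic_dim(X, n_neighbors=10, var_threshold=0.95):
    """Average, over all samples, the number of local principal
    directions needed to explain `var_threshold` of the variance."""
    dims = []
    for x in X:
        d = np.linalg.norm(X - x, axis=1)
        nbrs = X[np.argsort(d)[1:n_neighbors + 1]]   # skip the point itself
        ev = np.linalg.eigvalsh(np.cov(nbrs, rowvar=False))
        ev = np.sort(ev)[::-1]                       # descending eigenvalues
        cum = np.cumsum(ev) / ev.sum()
        dims.append(int(np.searchsorted(cum, var_threshold) + 1))
    return float(np.mean(dims))
```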


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1983

Nonparametric Discriminant Analysis

Keinosuke Fukunaga; James M. Mantock

A nonparametric method of discriminant analysis is proposed. It is based on nonparametric extensions of commonly used scatter matrices. Two advantages result from the use of the proposed nonparametric scatter matrices. First, they are generally of full rank. This provides the ability to specify the number of extracted features desired. This is in contrast to parametric discriminant analysis, which for an L class problem typically can determine at most L − 1 features. Second, the nonparametric nature of the scatter matrices allows the procedure to work well even for non-Gaussian data sets. Using the same basic framework, a procedure is proposed to test the structural similarity of two distributions. The procedure works in high-dimensional space. It specifies a linear decomposition of the original data space in which a relative indication of dissimilarity along each new basis vector is provided. The nonparametric scatter matrices are also used to derive a clustering procedure, which is recognized as a k-nearest neighbor version of the nonparametric valley seeking algorithm. The form which results provides a unified view of the parametric nearest mean reclassification algorithm and the nonparametric valley seeking algorithm.
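
For the two-class case, a nonparametric between-class scatter matrix can be sketched by pointing each sample at the local mean of its k nearest neighbors from the other class. The version below omits the per-sample weighting function the paper attaches to each term, and all names are illustrative.

```python
import numpy as np

def nonparametric_between_scatter(X0, X1, k=3):
    """Between-class scatter built from local k-NN means of the other
    class (two-class case, unweighted)."""
    d = X0.shape[1]
    Sb = np.zeros((d, d))
    for A, B in ((X0, X1), (X1, X0)):
        for x in A:
            dists = np.linalg.norm(B - x, axis=1)
            local_mean = B[np.argsort(dists)[:k]].mean(axis=0)
            diff = (x - local_mean)[:, None]         # column vector
            Sb += diff @ diff.T                      # rank-one contribution
    return Sb / (len(X0) + len(X1))
```

Because each term is a rank-one outer product from a distinct local mean, the matrix is generally of full rank, which is the first advantage noted in the abstract.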


IEEE Transactions on Information Theory | 1981

The optimal distance measure for nearest neighbor classification

Robert D. Short; Keinosuke Fukunaga

A local distance measure is shown to optimize the performance of the nearest neighbor two-class classifier for a finite number of samples. The difference between the finite-sample error and the asymptotic error is used as the criterion of improvement. This new distance measure is compared to the well-known Euclidean distance. An algorithm for practical implementation is introduced. This algorithm is shown to be computationally competitive with existing nearest neighbor procedures and is illustrated experimentally. A closed form for the corresponding second-order moment of this criterion is found. Finally, the above results are extended to …


IEEE Transactions on Computers | 1975

A Branch and Bound Clustering Algorithm

Warren L. G. Koontz; Patrenahalli M. Narendra; Keinosuke Fukunaga

The problem of clustering N objects into M classes may be viewed as a combinatorial optimization problem. In the literature on clustering, iterative hill-climbing techniques are used to find a locally optimum classification. In this paper, we develop a clustering algorithm based on the branch and bound method of combinatorial optimization. This algorithm determines the globally optimum classification and is computationally efficient.
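
A toy version shows where the global optimality comes from: within-cluster scatter can only grow as samples are assigned, so any partial assignment already costing more than the best complete one can be cut off. The sketch below is exhaustive (worst case on the order of M^N branches) and recomputes costs naively; it illustrates the bound, not the paper's efficient implementation, and all names are illustrative.

```python
import numpy as np

def bb_cluster(X, M):
    """Globally optimal assignment of rows of X to M clusters by
    branch and bound on within-cluster squared error (toy version)."""
    N = len(X)
    best = {"cost": np.inf, "labels": None}

    def cost(labels):
        # within-cluster sum of squared distances to each cluster mean
        total = 0.0
        for m in set(labels):
            pts = X[[i for i, lab in enumerate(labels) if lab == m]]
            total += ((pts - pts.mean(axis=0)) ** 2).sum()
        return total

    def branch(labels):
        if cost(labels) >= best["cost"]:
            return  # bound: adding samples can only increase the cost
        if len(labels) == N:
            best["cost"], best["labels"] = cost(labels), labels[:]
            return
        for m in range(M):
            branch(labels + [m])

    branch([0])  # fixing the first label breaks cluster-relabeling symmetry
    return best["labels"], best["cost"]
```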


IEEE Transactions on Information Theory | 1973

Optimization of k nearest neighbor density estimates

Keinosuke Fukunaga; Larry D. Hostetler

Nonparametric density estimation using the k-nearest-neighbor approach is discussed. By developing a relation between the volume and the coverage of a region, a functional form for the optimum k in terms of the sample size, the dimensionality of the observation space, and the underlying probability distribution is obtained. Within the class of density functions that can be made circularly symmetric by a linear transformation, the optimum matrix for use in a quadratic-form metric is obtained. For Gaussian densities this becomes the inverse covariance matrix that is often used without proof of optimality. The close relationship of this approach to that of Parzen estimators is then investigated.
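
The k-NN density estimate underlying this analysis takes only a few lines: the probability mass k/N sits in the ball reaching the k-th neighbor, so the density is that mass over the ball's volume. The sketch below uses the Euclidean metric and assumes the query point x is not itself one of the samples; the function name is illustrative.

```python
import numpy as np
from math import gamma, pi

def knn_density(x, data, k):
    """k-NN density estimate: (k/N) over the volume of the ball that
    reaches the k-th nearest sample. Assumes x is not in `data`."""
    N, d = data.shape
    r = np.sort(np.linalg.norm(data - x, axis=1))[k - 1]  # k-th NN distance
    unit_ball = pi ** (d / 2) / gamma(d / 2 + 1)          # volume of unit d-ball
    return (k / N) / (unit_ball * r ** d)
```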


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1987

Bayes Error Estimation Using Parzen and k-NN Procedures

Keinosuke Fukunaga; Donald M. Hummels

The use of k nearest neighbor (k-NN) and Parzen density estimates to obtain estimates of the Bayes error is investigated under limited design set conditions. By drawing analogies between the k-NN and Parzen procedures, new procedures are suggested, and experimental results are given which indicate that these procedures yield a significant improvement over the conventional k-NN and Parzen procedures. We show that, by varying the decision threshold, many of the biases associated with the k-NN or Parzen density estimates may be compensated, and successful error estimation may be performed in spite of these biases. Experimental results are given which demonstrate the effect of kernel size and shape (Parzen), the size of k (k-NN), and the number of samples in the design set.
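
The threshold-varying idea can be sketched with a leave-one-out k-NN error count: instead of the usual majority vote, compare the fraction of class-1 neighbors against an adjustable threshold. The threshold value, names, and the binary 0/1 labeling below are illustrative.

```python
import numpy as np

def loo_knn_error(X, y, k=5, threshold=0.5):
    """Leave-one-out k-NN error with a tunable vote threshold.
    y holds binary labels 0/1; threshold=0.5 is the usual majority rule."""
    errors = 0
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]     # nearest neighbors, self excluded
        vote = y[nbrs].mean()             # fraction of class-1 neighbors
        errors += ((1 if vote > threshold else 0) != y[i])
    return errors / len(X)
```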

Collaboration


Dive into Keinosuke Fukunaga's collaboration.

Top Co-Authors

David R. Olsen

Massachusetts Institute of Technology
