Richard J. Hathaway
Georgia Southern University
Publication
Featured research published by Richard J. Hathaway.
IEEE Transactions on Fuzzy Systems | 1993
Richard J. Hathaway; James C. Bezdek
A family of objective functions called fuzzy c-regression models, which can be used to fit switching regression models to certain types of mixed data, is presented. Minimization of particular objective functions in the family yields simultaneous estimates for the parameters of c regression models, together with a fuzzy c-partitioning of the data. A general optimization approach for the family of objective functions is given and corresponding theoretical convergence results are discussed. The approach is illustrated by two numerical examples that show how it can be used to fit mixed data to coupled linear and nonlinear models.
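The alternating scheme the abstract describes, weighted least-squares fits for the regression models followed by FCM-style membership updates from the squared residuals, can be sketched for linear models as follows (the name `fcrm_linear` and all implementation details are assumptions for illustration, not the paper's code):

```python
import numpy as np

def fcrm_linear(x, y, c=2, m=2.0, n_iter=300, seed=0):
    # Alternate (1) weighted least-squares fits of c lines y ~ a*x + b
    # and (2) FCM-style membership updates from the squared residuals.
    rng = np.random.default_rng(seed)
    n = len(x)
    U = rng.random((c, n))
    U /= U.sum(axis=0)                    # columns of a fuzzy c-partition sum to 1
    X = np.column_stack([x, np.ones(n)])  # design matrix for a*x + b
    for _ in range(n_iter):
        coefs = []
        for i in range(c):
            w = U[i] ** m                 # membership weights for model i
            A = X * w[:, None]
            coefs.append(np.linalg.lstsq(A.T @ X, A.T @ y, rcond=None)[0])
        E = np.array([(y - X @ b) ** 2 for b in coefs]) + 1e-12
        Einv = E ** (-1.0 / (m - 1.0))
        U = Einv / Einv.sum(axis=0)       # membership update from residuals
    return coefs, U
```

On well-separated switching-regression data, the alternation typically assigns each datum high membership in the line that fits it best.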
IEEE Transactions on Systems, Man, and Cybernetics | 1987
James C. Bezdek; Richard J. Hathaway; Michael J. Sabin; William T. Tucker
A counterexample to the original incorrect convergence theorem for the fuzzy c-means (FCM) clustering algorithms (see J.C. Bezdek, IEEE Trans. Pattern Anal. Mach. Intell., vol.PAMI-2, no.1, pp.1-8, 1980) is provided. This counterexample establishes the existence of saddle points of the FCM objective function at locations other than the geometric centroid of fuzzy c-partition space. Counterexamples previously discussed by W.T. Tucker (1987) are summarized. The correct theorem is stated without proof: every FCM iterate sequence converges, at least along a subsequence, to either a local minimum or saddle point of the FCM objective function. Although Tucker's counterexamples and the corrected theory appear elsewhere, they are restated as a caution not to further propagate the original incorrect convergence statement.
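For reference, the FCM iterate sequence whose convergence is at issue alternates the standard center and membership updates. A minimal sketch with textbook updates (`fcm` is a hypothetical helper, not code from the paper):

```python
import numpy as np

def fcm(X, c=2, m=2.0, n_iter=100, seed=0):
    # Standard fuzzy c-means alternation: centers from membership-weighted
    # means, memberships from inverse-distance ratios.
    rng = np.random.default_rng(seed)
    U = rng.random((c, X.shape[0]))
    U /= U.sum(axis=0)
    for _ in range(n_iter):
        W = U ** m
        V = (W @ X) / W.sum(axis=1, keepdims=True)              # cluster centers
        D = ((X[None, :, :] - V[:, None, :]) ** 2).sum(axis=2) + 1e-12
        Dinv = D ** (-1.0 / (m - 1.0))
        U = Dinv / Dinv.sum(axis=0)                             # memberships
    return V, U
```

The corrected theorem says a sequence of such iterates converges (along a subsequence) to a local minimum or a saddle point, not necessarily to a global minimum.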
IEEE Transactions on Systems, Man, and Cybernetics | 2001
Richard J. Hathaway; James C. Bezdek
The problem of clustering a real s-dimensional data set X = {x_1, ..., x_n} ⊂ R^s is considered. Usually, each observation (or datum) consists of numerical values for all s features (such as height, length, etc.), but sometimes data sets can contain vectors that are missing one or more of the feature values. For example, a particular datum x_k might be incomplete, having the form x_k = (254.3, ?, 333.2, 47.45, ?)^T, where the second and fifth feature values are missing. The fuzzy c-means (FCM) algorithm is a useful tool for clustering real s-dimensional data, but it is not directly applicable to the case of incomplete data. Four strategies for doing FCM clustering of incomplete data sets are given, three of which involve modified versions of the FCM algorithm. Numerical convergence properties of the new algorithms are discussed, and all approaches are tested using real and artificially generated incomplete data sets.
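One simple way to make FCM's distance computations tolerate incomplete data is to use only the observed features and rescale, in the spirit of a partial-distance strategy. The sketch below is an illustration under that assumption, not the paper's implementation; missing values are encoded as NaN:

```python
import numpy as np

def partial_distance(x, v):
    # Squared distance over observed features only, rescaled by
    # s / (number of observed features); missing entries of x are NaN.
    mask = ~np.isnan(x)
    return ((x[mask] - v[mask]) ** 2).sum() * x.size / mask.sum()
```

Dropping this distance into the FCM membership update lets incomplete data participate in clustering without imputing the missing values first.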
Pattern Recognition | 1994
Richard J. Hathaway; James C. Bezdek
The relational fuzzy c-means (RFCM) algorithm can be used to cluster a set of n objects described by pair-wise dissimilarity values if (and only if) there exist n points in R^(n-1) whose squared Euclidean distances precisely match the given dissimilarity data. This strong restriction on the dissimilarity data renders RFCM inapplicable to most relational clustering problems. This paper substantially improves RFCM by generalizing it to the case of arbitrary (symmetric) dissimilarity data. The generalization is obtained using a computationally efficient modification of the existing algorithm that is equivalent to applying a “spreading” transformation to the dissimilarity data. While the method given applies specifically to dissimilarity data, a simple transformation can be used to convert similarity relations into dissimilarity data, so the method is applicable to any numerical relational data that are positive, reflexive (or anti-reflexive) and symmetric. Numerical examples illustrate and compare the present approach to problems that can be studied with alternatives such as the linkage algorithms.
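The "spreading" idea can be illustrated as adding a constant beta to every off-diagonal dissimilarity, and the squared-Euclidean-realizability condition can be checked with the classical double-centering (Schoenberg / classical MDS) criterion. A sketch under those assumptions (`spread` and `is_euclidean` are illustrative names, not the paper's code):

```python
import numpy as np

def spread(D, beta):
    # Add beta to every off-diagonal dissimilarity; the zero diagonal is kept.
    n = D.shape[0]
    return D + beta * (np.ones((n, n)) - np.eye(n))

def is_euclidean(D, tol=1e-9):
    # Schoenberg / classical-MDS test: D matches squared Euclidean distances
    # of some point set iff the double-centered matrix B is positive
    # semidefinite.
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ D @ J
    return np.linalg.eigvalsh(B).min() >= -tol
```

For dissimilarity data that fail the test, a sufficiently large beta makes the spread matrix Euclidean-realizable, which is what lets the generalized algorithm handle arbitrary symmetric dissimilarities.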
IEEE Transactions on Fuzzy Systems | 2000
Richard J. Hathaway; James C. Bezdek; Yingkang Hu
Fuzzy c-means (FCM) is a useful clustering technique. Modifications of FCM using L_1 norm distances increase robustness to outliers. Object and relational data versions of FCM clustering are defined for the more general case where the L_p norm (p >= 1) or semi-norm (0 < p < 1) distances are used. The object data models are implemented for all p > 0 in order to facilitate the empirical examination of the object data models. Both object and relational approaches are included in a numerical study.
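For concreteness, the L_p distances in question can be computed as below (a trivial sketch; `lp_dist` is an illustrative helper):

```python
import numpy as np

def lp_dist(x, v, p):
    # L_p distance: a norm for p >= 1; for 0 < p < 1 the same formula
    # gives the semi-norm-style dissimilarity (no triangle inequality).
    return float((np.abs(x - v) ** p).sum() ** (1.0 / p))
```

With p = 1 this is the outlier-robust city-block distance; p = 2 recovers the usual Euclidean distance.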
Pattern Recognition | 1989
Richard J. Hathaway; John W. Davenport; James C. Bezdek
The hard and fuzzy c-means algorithms are widely used, effective tools for the problem of clustering n objects into (hard or fuzzy) groups of similar individuals when the data is available as object data, consisting of a set of n feature vectors in R^p. However, object data algorithms are not directly applicable when the n objects are implicitly described in terms of relational data, which consists of a set of n^2 measurements of relations between each of the pairs of objects. New relational versions of the hard and fuzzy c-means algorithms are presented here for the case when the relational data can reasonably be viewed as some measure of distance. Some convergence properties of the algorithms are given along with a numerical example.
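A relational fuzzy c-means iteration of the kind described, taking a dissimilarity matrix R that matches squared Euclidean distances of some object data, can be sketched as follows (the updates follow the standard RFCM pattern, with "centers" represented as weight vectors over the n objects; `rfcm` and its details are illustrative, not the authors' code):

```python
import numpy as np

def rfcm(R, c=2, m=2.0, n_iter=100, seed=0):
    # Relational fuzzy c-means sketch: each cluster "center" is a weight
    # vector over the n objects, and distances come directly from R.
    rng = np.random.default_rng(seed)
    n = R.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0)
    for _ in range(n_iter):
        W = U ** m
        V = W / W.sum(axis=1, keepdims=True)               # weight vector per cluster
        quad = np.einsum('ij,jk,ik->i', V, R, V)           # v_i' R v_i
        d = np.maximum(V @ R - 0.5 * quad[:, None], 1e-12) # relational distances
        dinv = d ** (-1.0 / (m - 1.0))
        U = dinv / dinv.sum(axis=0)                        # membership update
    return U
```

When R is built from object data as squared Euclidean distances, this relational iteration produces the same kind of fuzzy partition the object-data FCM would.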
IEEE Transactions on Fuzzy Systems | 1995
Richard J. Hathaway; James C. Bezdek
Various hard, fuzzy and possibilistic clustering criteria (objective functions) are useful as bases for a variety of pattern recognition problems. At present, many of these criteria have customized individual optimization algorithms. Because of the specialized nature of these algorithms, experimentation with new and existing criteria can be very inconvenient and costly in terms of development and implementation time. This paper shows how to reformulate some clustering criteria so that specialized algorithms can be replaced by general optimization routines found in commercially available software. We prove that the original and reformulated versions of each criterion are fully equivalent. Finally, two numerical examples are given to illustrate reformulation.
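The flavor of such a reformulation can be illustrated for FCM with squared Euclidean distances: optimizing the memberships out of the original criterion leaves a function of the prototypes alone, which a general-purpose optimizer could then minimize directly. A sketch under those assumptions (function names are illustrative; the equivalence holds exactly when the memberships take their optimal values):

```python
import numpy as np

def fcm_objective(U, V, X, m=2.0):
    # Original FCM criterion J_m(U, V) with squared Euclidean distances.
    D = ((X[None, :, :] - V[:, None, :]) ** 2).sum(axis=2)
    return ((U ** m) * D).sum()

def reformulated(V, X, m=2.0):
    # R_m(V): J_m with the memberships optimized out, leaving a function of
    # the prototypes alone that a general-purpose optimizer can attack.
    D = ((X[None, :, :] - V[:, None, :]) ** 2).sum(axis=2)
    return ((D ** (1.0 / (1.0 - m))).sum(axis=0) ** (1.0 - m)).sum()
```

Plugging the optimal memberships u_ik = d_ik^(1/(1-m)) / sum_j d_jk^(1/(1-m)) into J_m recovers R_m term by term, which is the equivalence the paper proves.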
Soft Computing | 2002
James C. Bezdek; Richard J. Hathaway
Let f : R^s → R be a real-valued scalar field, and let x = (x_1, ..., x_s)^T ∈ R^s be partitioned into t subsets of non-overlapping variables as x = (X_1, ..., X_t)^T, with X_i ∈ R^(p_i) for i = 1, ..., t, where p_1 + ... + p_t = s. Alternating optimization (AO) is an iterative procedure for minimizing (or maximizing) the function f(x) = f(X_1, X_2, ..., X_t) jointly over all variables by alternating restricted minimizations over the individual subsets of variables X_1, ..., X_t. AO is the basis for the c-means clustering algorithms (t = 2), many forms of vector quantization (t = 2, 3 and 4), and the expectation-maximization (EM) algorithm (t = 4) for normal mixture decomposition. First we review where and how AO fits into the overall optimization landscape. Then we discuss the important theoretical issues connected with the AO approach. Finally, we state (without proofs) two new theorems that give very general local and global convergence and rate of convergence results which hold for all partitionings of x.
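The AO scheme can be illustrated on a toy convex function with t = 2 blocks, where each restricted minimization has a closed form (a hypothetical example chosen for clarity, not from the paper):

```python
def alternating_optimization(n_iter=100):
    # AO on f(x, y) = (x - y)**2 + (y - 2)**2 + x**2 with t = 2 blocks;
    # each restricted minimization is solved exactly in closed form.
    x, y = 0.0, 0.0
    for _ in range(n_iter):
        x = y / 2.0            # argmin over x with y fixed: 4x - 2y = 0
        y = (x + 2.0) / 2.0    # argmin over y with x fixed: -2x + 4y - 4 = 0
    return x, y                # converges to the joint minimizer (2/3, 4/3)
```

Because the function is convex, the alternation converges to the joint minimizer; for nonconvex objectives like the clustering criteria above, the cited theorems give the corresponding local and global convergence guarantees.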
IEEE Transactions on Fuzzy Systems | 1996
Richard J. Hathaway; James C. Bezdek; Witold Pedrycz
Presented is a model that integrates three data types (numbers, intervals, and linguistic assessments). Data of these three types come from a variety of sensors. One objective of sensor-fusion models is to provide a common framework for data integration, processing, and interpretation. That is what our model does. We use a small set of artificial data to illustrate how problems as diverse as feature analysis, clustering, cluster validity, and prototype classifier design can all be formulated and attacked with standard methods once the data are converted to the generalized coordinates of our model. The effects of reparameterization on computational outputs are discussed. Numerical examples illustrate that the proposed model affords a natural way to approach problems which involve mixed data types.
Journal of Classification | 1988
Richard J. Hathaway; James C. Bezdek
One of the main techniques embodied in many pattern recognition systems is cluster analysis — the identification of substructure in unlabeled data sets. The fuzzy c-means algorithms (FCM) have often been used to solve certain types of clustering problems. During the last two years several new local results concerning both numerical and stochastic convergence of FCM have been found. Numerical results describe how the algorithms behave when evaluated as optimization algorithms for finding minima of the corresponding family of fuzzy c-means functionals. Stochastic properties refer to the accuracy of minima of FCM functionals as approximations to parameters of statistical populations which are sometimes assumed to be associated with the data. The purpose of this paper is to collect the main global and local, numerical and stochastic, convergence results for FCM in a brief and unified way.