Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Nicolaos B. Karayiannis is active.

Publications


Featured research published by Nicolaos B. Karayiannis.


IEEE Transactions on Neural Networks | 1997

Growing radial basis neural networks: merging supervised and unsupervised learning with network growth techniques

Nicolaos B. Karayiannis; Glenn Weiqun Mi

This paper proposes a framework for constructing and training radial basis function (RBF) neural networks. The proposed growing radial basis function (GRBF) network begins with a small number of prototypes, which determine the locations of radial basis functions. In the process of training, the GRBF network grows by splitting one of the prototypes at each growing cycle. Two splitting criteria are proposed to determine which prototype to split in each growing cycle. The proposed hybrid learning scheme provides a framework for incorporating existing algorithms in the training of GRBF networks. These include unsupervised algorithms for clustering and learning vector quantization, as well as learning algorithms for training single-layer linear neural networks. A supervised learning scheme based on the minimization of the localized class-conditional variance is also proposed and tested. GRBF neural networks are evaluated and tested on a variety of data sets with very satisfactory results.
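The grow-and-split cycle described above can be sketched in a few lines. This is a minimal illustration, not the paper's exact splitting criteria; the function names and the perturbation size `eps` are hypothetical:

```python
import numpy as np

def rbf_forward(X, centers, width, W):
    """Outputs of a Gaussian RBF network: X is (n, d), the prototypes
    (centers) are (k, d), and W is the (k, m) matrix of output weights."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    H = np.exp(-d2 / (2.0 * width ** 2))   # (n, k) hidden-layer responses
    return H @ W

def split_prototype(centers, j, eps=0.05):
    """One growing cycle: replace prototype j with two slightly perturbed
    copies, increasing the number of radial basis functions by one."""
    grown = np.vstack([centers, centers[j] + eps])
    grown[j] = centers[j] - eps
    return grown
```

After each split, the enlarged network is retrained before the next growing cycle, as in the hybrid scheme the paper describes.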


IEEE Transactions on Medical Imaging | 1998

Detection of microcalcifications in digital mammograms using wavelets

Ted C. Wang; Nicolaos B. Karayiannis

This paper presents an approach for detecting microcalcifications in digital mammograms employing wavelet-based subband image decomposition. The microcalcifications appear in small clusters of a few pixels with relatively high intensity compared with their neighboring pixels. These image features can be preserved by a detection system that employs a suitable image transform which can localize the signal characteristics in the original and the transform domain. Given that the microcalcifications correspond to high-frequency components of the image spectrum, detection of microcalcifications is achieved by decomposing the mammograms into different frequency subbands, suppressing the low-frequency subband, and, finally, reconstructing the mammogram from the subbands containing only high frequencies. Preliminary experiments indicate that further studies are needed to investigate the potential of wavelet-based subband image decomposition as a tool for detecting microcalcifications in digital mammograms.
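The suppress-and-reconstruct pipeline can be illustrated with a single-level 2-D Haar decomposition, used here only as a self-contained stand-in for the paper's wavelet filters; real mammogram processing would use longer filters and several decomposition levels:

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar decomposition (rows, then columns).
    Returns the approximation (LL) and detail (LH, HL, HH) subbands."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0   # row low-pass
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0   # row high-pass
    LL = (a[0::2] + a[1::2]) / 2.0
    LH = (a[0::2] - a[1::2]) / 2.0
    HL = (d[0::2] + d[1::2]) / 2.0
    HH = (d[0::2] - d[1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Inverse of haar2d: reconstruct the image from its subbands."""
    a = np.empty((LL.shape[0] * 2, LL.shape[1]))
    a[0::2], a[1::2] = LL + LH, LL - LH
    d = np.empty_like(a)
    d[0::2], d[1::2] = HL + HH, HL - HH
    img = np.empty((a.shape[0], a.shape[1] * 2))
    img[:, 0::2], img[:, 1::2] = a + d, a - d
    return img

def detect_highfreq(img):
    """Zero the low-frequency subband and reconstruct, keeping only the
    high-frequency content where microcalcifications reside."""
    LL, LH, HL, HH = haar2d(img)
    return ihaar2d(np.zeros_like(LL), LH, HL, HH)
```

Zeroing `LL` before reconstruction removes the smooth background, so only high-frequency structure survives in the reconstructed image.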


IEEE Transactions on Neural Networks | 1999

Reformulated radial basis neural networks trained by gradient descent

Nicolaos B. Karayiannis

This paper presents an axiomatic approach for constructing radial basis function (RBF) neural networks. This approach results in a broad variety of admissible RBF models, including those employing Gaussian RBFs. The form of the RBFs is determined by a generator function. New RBF models can be developed according to the proposed approach by selecting generator functions other than exponential ones, which lead to Gaussian RBFs. This paper also proposes a supervised learning algorithm based on gradient descent for training reformulated RBF neural networks constructed using the proposed approach. A sensitivity analysis of the proposed algorithm relates the properties of RBFs with the convergence of gradient descent learning. Experiments involving a variety of reformulated RBF networks generated by linear and exponential generator functions indicate that gradient descent learning is simple, easily implementable, and produces RBF networks that perform considerably better than conventional RBF models trained by existing algorithms.
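As a concrete, deliberately simplified instance, gradient descent on the mean squared output error of a Gaussian RBF network looks as follows; the paper's algorithm also adapts the prototypes and covers non-Gaussian generator functions, which this sketch omits:

```python
import numpy as np

def design(X, centers, width):
    """Hidden-layer response matrix of a Gaussian RBF network."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def train_rbf_weights(X, y, centers, width, lr=0.5, epochs=2000):
    """Batch gradient descent on the mean squared error of the outputs."""
    H = design(X, centers, width)
    w = np.zeros(H.shape[1])
    for _ in range(epochs):
        w -= lr * H.T @ (H @ w - y) / len(y)   # gradient of the MSE
    return w
```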


IEEE Transactions on Fuzzy Systems | 1997

An integrated approach to fuzzy learning vector quantization and fuzzy c-means clustering

Nicolaos B. Karayiannis; James C. Bezdek

This paper derives an interpretation for a family of competitive learning algorithms and investigates their relationship to fuzzy c-means and fuzzy learning vector quantization. These algorithms map a set of feature vectors into a set of prototypes associated with a competitive network that performs unsupervised learning. Derivation of the new algorithms is accomplished by minimizing an average generalized distance between the feature vectors and prototypes using gradient descent. A close relationship between the resulting algorithms and fuzzy c-means is revealed by investigating the functionals involved. It is also shown that the fuzzy c-means and fuzzy learning vector quantization algorithms are related to the proposed algorithms if the learning rate at each iteration is selected to satisfy a certain condition.
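The fuzzy c-means side of this relationship can be sketched as the standard alternating-optimization loop below (a minimal version with deterministic initialization; the paper's algorithms replace this loop with gradient-descent competitive learning):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100):
    """Minimal fuzzy c-means: alternate between membership updates and
    prototype updates until the prototypes settle."""
    V = X[:c].astype(float).copy()                   # initial prototypes
    for _ in range(iters):
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1) + 1e-12
        U = (1.0 / d2) ** (1.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)            # fuzzy memberships
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]     # membership-weighted means
    return U, V
```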


international conference on artificial neural networks | 1992

Fast learning algorithms for neural networks

Nicolaos B. Karayiannis; Anastasios N. Venetsanopoulos

A generalized criterion for the training of feedforward neural networks is proposed. Depending on the optimization strategy used, this criterion leads to a variety of fast learning algorithms for single-layered as well as multilayered neural networks. The simplest algorithm devised on the basis of this generalized criterion is the fast delta rule algorithm, proposed for the training of single-layered neural networks. The application of a similar optimization strategy to multilayered neural networks, in conjunction with the proposed generalized criterion, provides the fast backpropagation algorithm. Another set of fast algorithms with better convergence properties is derived on the basis of the same strategy that recently provided a family of Efficient LEarning Algorithms for Neural NEtworks (ELEANNE). Several experiments verify that the fast algorithms developed perform the training of neural networks faster than the corresponding learning algorithms existing in the literature.
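The plain delta rule that the fast variants start from can be written in a few lines for a single layer of sigmoid units (batch version; the fast algorithms reformulate the criterion being minimized, which this sketch does not attempt):

```python
import numpy as np

def delta_rule(X, y, lr=0.5, epochs=5000):
    """Batch delta rule: gradient descent on the squared error of a
    single layer of sigmoid units (X should include a bias column)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        out = 1.0 / (1.0 + np.exp(-(X @ w)))             # sigmoid outputs
        w += lr * X.T @ ((y - out) * out * (1.0 - out))  # error gradient
    return w
```

For example, training on the four patterns of the logical OR function (with a bias input) yields weights that classify all four patterns correctly.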


IEEE Transactions on Image Processing | 1995

Fuzzy vector quantization algorithms and their application in image compression

Nicolaos B. Karayiannis; Pin-I Pai

This paper presents the development and evaluation of fuzzy vector quantization algorithms. These algorithms are designed to achieve the codebook quality provided by sophisticated but computationally demanding approaches, while retaining the advantages of the k-means algorithm frequently used in practice, such as speed, simplicity, and conceptual appeal. The uncertainty typically associated with clustering tasks is formulated in this approach by allowing the assignment of each training vector to multiple clusters in the early stages of the iterative codebook design process. A training vector assignment strategy is also proposed for the transition from the fuzzy mode, where each training vector can be assigned to multiple clusters, to the crisp mode, where each training vector is assigned to only one cluster. Such a strategy reduces the dependence of the resulting codebook on the random initial codebook selection. The resulting algorithms are used in image compression based on vector quantization. This application provides the basis for evaluating the computational efficiency of the proposed algorithms and for comparing the quality of the resulting codebook design with that provided by competing techniques.
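Once a codebook has been designed, compression itself is just nearest-codeword encoding of image blocks; a sketch (block extraction and entropy coding of the indices are omitted):

```python
import numpy as np

def vq_encode_decode(blocks, codebook):
    """Encode each flattened image block by the index of its nearest
    codeword, then decode by table lookup; the index stream is the
    compressed representation."""
    d2 = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d2.argmin(axis=1)
    return idx, codebook[idx]
```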


IEEE Transactions on Neural Networks | 2003

On the construction and training of reformulated radial basis function neural networks

Nicolaos B. Karayiannis; Mary M. Randolph-Gips

This paper presents a systematic approach for constructing reformulated radial basis function (RBF) neural networks, developed to facilitate their training by supervised learning algorithms based on gradient descent. This approach reduces the construction of radial basis function models to the selection of admissible generator functions. The selection of generator functions relies on the concept of the blind spot, which is introduced in the paper. The paper also introduces a new family of reformulated radial basis function neural networks, referred to as cosine radial basis functions. Cosine radial basis functions are constructed by linear generator functions of a special form, and their use as similarity measures in radial basis function models is justified by their geometric interpretation. A set of experiments on a variety of datasets indicates that cosine radial basis functions considerably outperform conventional radial basis function neural networks with Gaussian radial basis functions. Cosine radial basis functions are also strong competitors to existing reformulated radial basis function models trained by gradient descent and to feedforward neural networks with sigmoid hidden units.
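Assuming the cosine radial basis function takes the form a_j / sqrt(||x - v_j||^2 + a_j^2), which matches the geometric interpretation described above but should be treated as an assumption here, the hidden-layer response is:

```python
import numpy as np

def cosine_rbf(X, centers, a):
    """Cosine RBF responses a / sqrt(||x - v||^2 + a^2): equal to 1 when
    the input coincides with a prototype, decaying toward 0 with distance."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return a / np.sqrt(d2 + a ** 2)
```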


IEEE Transactions on Neural Networks | 1997

Quantum neural networks (QNNs): inherently fuzzy feedforward neural networks

Gopathy Purushothaman; Nicolaos B. Karayiannis

This paper introduces quantum neural networks (QNNs), a class of feedforward neural networks (FFNNs) inherently capable of estimating the structure of a feature space in the form of fuzzy sets. The hidden units of these networks develop quantized representations of the sample information provided by the training data set in various graded levels of certainty. Unlike other approaches attempting to merge fuzzy logic and neural networks, QNNs can be used in pattern classification problems without any restricting assumptions such as the availability of a priori knowledge or desired membership profile, convexity of classes, a limited number of classes, etc. Experimental results presented here show that QNNs are capable of recognizing structures in data, a property that conventional FFNNs with sigmoidal hidden units lack.
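The graded, multilevel hidden unit can be sketched as a superposition of shifted sigmoids; the level positions and slope below are illustrative, and in the paper such quantities are learned rather than fixed:

```python
import numpy as np

def quantum_activation(x, levels, slope=10.0):
    """Multilevel activation: an average of sigmoids shifted by the
    'quantum levels', giving a staircase response with one graded step
    per level instead of a single 0-to-1 transition."""
    return sum(1.0 / (1.0 + np.exp(-slope * (x - t))) for t in levels) / len(levels)
```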


IEEE Transactions on Neural Networks | 1996

Fuzzy algorithms for learning vector quantization

Nicolaos B. Karayiannis; Pin-I Pai

This paper presents the development of fuzzy algorithms for learning vector quantization (FALVQ). These algorithms are derived by minimizing the weighted sum of the squared Euclidean distances between an input vector, which represents a feature vector, and the weight vectors of a competitive learning vector quantization (LVQ) network, which represent the prototypes. This formulation leads to competitive algorithms, which allow each input vector to attract all prototypes. The strength of attraction between each input and the prototypes is determined by a set of membership functions, which can be selected on the basis of specific criteria. A gradient-descent-based learning rule is derived for a general class of admissible membership functions which satisfy certain properties. The FALVQ 1, FALVQ 2, and FALVQ 3 families of algorithms are developed by selecting admissible membership functions with different properties. The proposed algorithms are tested and evaluated using the IRIS data set. The efficiency of the proposed algorithms is also illustrated by their use in codebook design required for image compression based on vector quantization.
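The defining move, each input attracting all prototypes in proportion to fuzzy memberships, can be sketched as a single update step; the inverse-distance membership used here is only a placeholder for the FALVQ membership functions:

```python
import numpy as np

def falvq_step(x, V, lr=0.1):
    """One competitive update in which the input x attracts *all*
    prototypes in V, weighted by memberships derived from relative
    distances (a sketch; FALVQ 1-3 differ in the membership functions)."""
    d2 = ((V - x) ** 2).sum(-1)
    u = 1.0 / (d2 + 1e-12)
    u /= u.sum()                        # memberships from relative distances
    return V + lr * u[:, None] * (x - V)
```

Note that both the nearest and the farther prototypes move toward the input, with the nearest moving the most.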


IEEE Transactions on Neural Networks | 2000

Soft learning vector quantization and clustering algorithms based on ordered weighted aggregation operators

Nicolaos B. Karayiannis

This paper presents the development and investigates the properties of ordered weighted learning vector quantization (LVQ) and clustering algorithms. These algorithms are developed by using gradient descent to minimize reformulation functions based on aggregation operators. An axiomatic approach provides conditions for selecting aggregation operators that lead to admissible reformulation functions. Minimization of admissible reformulation functions based on ordered weighted aggregation operators produces a family of soft LVQ and clustering algorithms, which includes fuzzy LVQ and clustering algorithms as special cases. The proposed LVQ and clustering algorithms are used to perform segmentation of magnetic resonance (MR) images of the brain. The diagnostic value of the segmented MR images provides the basis for evaluating a variety of ordered weighted LVQ and clustering algorithms.
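An ordered weighted aggregation operator itself is simple: sort the arguments, then apply a fixed weight vector, so that different weight choices interpolate between the maximum, the mean, and the minimum:

```python
import numpy as np

def owa(values, weights):
    """Ordered weighted aggregation: sort the arguments in decreasing
    order, then take the weighted sum with a fixed weight vector."""
    return float(np.sort(values)[::-1] @ np.asarray(weights, dtype=float))
```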

Collaboration


Dive into Nicolaos B. Karayiannis's collaborations.

Top Co-Authors

Eli M. Mizrahi (Baylor College of Medicine)
James D. Frost (Baylor College of Medicine)
Merrill S. Wise (Baylor College of Medicine)
Pin-I Pai (University of Houston)
Mary M. Randolph-Gips (University of Houston–Clear Lake)