Publication


Featured research published by Mingyu Zhong.


IEEE Transactions on Neural Networks | 2010

An Adaptive Multiobjective Approach to Evolving ART Architectures

Assem Kaylani; Michael Georgiopoulos; Mansooreh Mollaghasemi; Georgios C. Anagnostopoulos; Christopher Sentelle; Mingyu Zhong

In this paper, we present the evolution of adaptive resonance theory (ART) neural network architectures (classifiers) using a multiobjective optimization approach. In particular, we propose the use of a multiobjective evolutionary approach to simultaneously evolve the weights and the topology of three well-known ART architectures: fuzzy ARTMAP (FAM), ellipsoidal ARTMAP (EAM), and Gaussian ARTMAP (GAM). We refer to the resulting architectures as MO-GFAM, MO-GEAM, and MO-GGAM, and collectively as MO-GART. The major advantage of MO-GART is that it produces a number of solutions for the classification problem at hand that offer different trade-offs between two merit criteria: accuracy on unseen data (generalization) and size (number of categories created). MO-GART is shown to be more elegant (it does not require user intervention to define the network parameters), more effective (better accuracy and smaller size), and more efficient (faster to produce the solution networks) than other ART neural network architectures that have appeared in the literature. Furthermore, MO-GART is shown to be competitive with other popular classifiers, such as classification and regression trees (CART) and support vector machines (SVMs).
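
The core of MO-GART's output is a set of networks trading off generalization error against category count. As a rough illustration of that selection step (not the authors' implementation, and with hypothetical scores), a Pareto-front filter over (error, size) pairs might look like:

```python
# Minimal sketch of the Pareto-front selection idea behind MO-GART:
# each candidate ART network is scored by (validation error, category count),
# and only non-dominated candidates are kept. The scores are hypothetical.

def pareto_front(candidates):
    """Return candidates not dominated in both error and size (lower is better)."""
    front = []
    for i, (err_i, size_i) in enumerate(candidates):
        dominated = any(
            err_j <= err_i and size_j <= size_i and (err_j, size_j) != (err_i, size_i)
            for j, (err_j, size_j) in enumerate(candidates) if j != i
        )
        if not dominated:
            front.append((err_i, size_i))
    return front

# Example: (validation error, number of categories) for six evolved networks.
scores = [(0.12, 40), (0.10, 55), (0.15, 20), (0.10, 60), (0.18, 18), (0.11, 45)]
print(pareto_front(scores))
# -> [(0.12, 40), (0.10, 55), (0.15, 20), (0.18, 18), (0.11, 45)]
```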


International Joint Conference on Neural Networks | 2006

Gap-Based Estimation: Choosing the Smoothing Parameters for Probabilistic and General Regression Neural Networks

Mingyu Zhong; Dave Coggeshall; Ehsan Ghaneie; Thomas Pope; Mark Rivera; Michael Georgiopoulos; Georgios C. Anagnostopoulos; Mansooreh Mollaghasemi; S.M. Richie

Probabilistic neural networks (PNN) and general regression neural networks (GRNN) represent knowledge by simple but interpretable models that approximate the optimal classifier or predictor in terms of expected accuracy. These models require the specification of an important smoothing parameter, which is usually chosen by cross-validation or clustering. In this article, we demonstrate the problems with the cross-validation and clustering approaches for specifying the smoothing parameter, discuss the relationship between this parameter and some of the data statistics, and develop a fast approach to determine its optimal value. Finally, through experimentation, we show that our approach, referred to as gap-based estimation, is faster than the compared approaches, including support vector machines, and yields good and stable accuracy.
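
To make the role of the smoothing parameter concrete, here is a minimal Parzen-window (PNN-style) classifier sketch. The paper's gap-based estimator itself is not reproduced; this toy code only exposes the sigma that cross-validation, clustering, or gap-based estimation must choose, and the data are synthetic.

```python
# Sketch of a probabilistic neural network (Parzen-window classifier) showing
# where the smoothing parameter sigma enters. Illustrative only; not the
# gap-based estimator proposed in the paper.
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma):
    """Classify each test point by the class with the largest mean Gaussian kernel response."""
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        d2 = np.sum((X_train - x) ** 2, axis=1)       # squared distances to training points
        k = np.exp(-d2 / (2.0 * sigma ** 2))          # Gaussian kernel responses
        scores = [k[y_train == c].mean() for c in classes]  # class-conditional density estimates
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)

# Toy usage: sigma too small leads to memorization; too large oversmooths.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = (X[:, 0] > 0).astype(int)
print(pnn_predict(X, y, X[:5], sigma=0.5))
```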


Machine Learning | 2008

A k-norm pruning algorithm for decision tree classifiers based on error rate estimation

Mingyu Zhong; Michael Georgiopoulos; Georgios C. Anagnostopoulos

Decision trees are well-known and established models for classification and regression. In this paper, we focus on the estimation and the minimization of the misclassification rate of decision tree classifiers. We apply Lidstone’s Law of Succession for the estimation of the class probabilities and error rates. In our work, we take into account not only the expected values of the error rate, which has been the norm in existing research, but also the corresponding reliability (measured by standard deviations) of the error rate. Based on this estimation, we propose an efficient pruning algorithm, called k-norm pruning, that has a clear theoretical interpretation, is easily implemented, and does not require a validation set. Our experiments show that our proposed pruning algorithm produces accurate trees quickly, and compares very favorably with two other well-known pruning algorithms, CCP of CART and EBP of C4.5.
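
As a hedged illustration of the estimation step: Lidstone's law of succession smooths the observed error counts, and a Beta-posterior standard deviation can serve as the reliability measure the abstract mentions. The smoothing constant and the pruning rule below are illustrative assumptions, not the paper's exact k-norm criterion.

```python
# Illustrative sketch of Lidstone-smoothed error estimation for tree pruning.
# The constant lam and the comparison rule are assumptions for illustration.
import math

def leaf_error_stats(n_errors, n_total, lam=1.0):
    """Lidstone-smoothed error rate and its std. dev. under a Beta posterior."""
    a = n_errors + lam               # pseudo-count of misclassified examples
    b = (n_total - n_errors) + lam   # pseudo-count of correct examples
    mean = a / (a + b)
    var = (a * b) / ((a + b) ** 2 * (a + b + 1))
    return mean, math.sqrt(var)

# Prune a subtree when its estimated error, penalized by its uncertainty,
# is no better than collapsing it to a leaf (illustrative rule).
leaf_mean, leaf_sd = leaf_error_stats(n_errors=12, n_total=100)
sub_mean, sub_sd = leaf_error_stats(n_errors=9, n_total=100)
prune = sub_mean + sub_sd >= leaf_mean + leaf_sd
print(round(leaf_mean, 4), round(sub_mean, 4), prune)  # subtree kept here
```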


Neural Networks | 2007

Experiments with Safe µARTMAP: Effect of the network parameters on the network performance

Mingyu Zhong; Bryan Rosander; Michael Georgiopoulos; Georgios C. Anagnostopoulos; Mansooreh Mollaghasemi; S.M. Richie

Fuzzy ARTMAP (FAM) is currently considered one of the premier neural network architectures for solving classification problems. One of its limitations, extensively reported in the literature, is the category proliferation problem: Fuzzy ARTMAP tends to increase its network size as it is confronted with more and more data, especially when the data are noisy and/or overlapping. To remedy this problem, a number of researchers have designed modifications to the training phase of Fuzzy ARTMAP that have the beneficial effect of reducing category proliferation. One of these modified architectures, proposed by Gomez-Sanchez and his colleagues, is referred to as Safe µARTMAP. In this paper we present analytical arguments that demonstrate how the ranges of some of the Safe µARTMAP network parameters should be chosen. Through a combination of these analytical arguments and experimentation, we identify good default values for some of these parameters; the defaults save computation when a well-performing Safe µARTMAP network needs to be identified for a new classification problem. Furthermore, we performed exhaustive experimentation to find the best Safe µARTMAP network for a variety of problems (simulated and real), and compared it with other best-performing ART networks, including ones that claim to resolve the category proliferation problem in Fuzzy ARTMAP. These experimental results support pair-wise comparisons among a number of ART networks (including Safe µARTMAP).
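
The experimentation described above amounts to sweeping analytically restricted parameter ranges and scoring each combination. A minimal sketch of such a sweep follows; the parameter names and ranges are placeholders, not the values derived in the paper.

```python
# Hedged sketch of an exhaustive parameter sweep: evaluate every combination
# from restricted ranges and keep the best (accuracy, size) trade-off.
# Parameter names, ranges, and the scoring rule are illustrative assumptions.
from itertools import product

def sweep(train_eval, param_grid):
    """train_eval(params) -> (error, size); return the best-scoring combination."""
    best_params, best_score = None, None
    for combo in product(*param_grid.values()):
        params = dict(zip(param_grid.keys(), combo))
        error, size = train_eval(params)
        score = (error, size)  # lexicographic: accuracy first, then network size
        if best_score is None or score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Placeholder grid; a real experiment would train Safe µARTMAP per combination.
grid = {"vigilance": [0.0, 0.5, 0.9], "entropy_threshold": [0.1, 0.3]}
fake_eval = lambda p: (abs(p["vigilance"] - 0.5) / 10, 30 + int(10 * p["entropy_threshold"]))
print(sweep(fake_eval, grid))
```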


International Joint Conference on Neural Networks | 2006

Experiments with Safe µARTMAP and Comparisons to Other ART Networks

Mingyu Zhong; Bryan Rosander; Michael Georgiopoulos; Georgios C. Anagnostopoulos; Mansooreh Mollaghasemi; S.M. Richie

Fuzzy ARTMAP (FAM) is currently considered one of the premier neural network architectures for solving classification problems. Safe µARTMAP, a modified version of FAM, was introduced to remedy the category proliferation problem that has been extensively reported in the literature. However, Safe µARTMAP's performance depends on a number of parameters. In this paper, we analyze each parameter to establish candidate values for evaluation. We performed exhaustive experimentation to identify good default values for these parameters across a variety of problems, and compared the best-performing Safe µARTMAP network with other best-performing ART networks, including those that claim to solve the category proliferation problem.


International Conference on Pattern Recognition | 2008

Properties of the k-norm pruning algorithm for decision tree classifiers

Mingyu Zhong; Michael Georgiopoulos; Georgios C. Anagnostopoulos

Pruning is one of the key procedures in training decision tree classifiers. It removes trivial rules from the raw knowledge base built from training examples, in order to avoid overfitting to noisy, conflicting, or fuzzy inputs, so that the refined model generalizes better to unseen cases. In this paper, we present a number of properties of k-norm pruning, a recently proposed pruning algorithm with a clear theoretical interpretation. An earlier paper showed that k-norm pruning compares very favorably in terms of accuracy and size with minimal cost-complexity pruning and error-based pruning, two of the most cited decision tree pruning methods; it also showed that k-norm pruning is more efficient, at times by orders of magnitude, than both. Here, we demonstrate the validity of the k-norm properties through a series of theorems and explain their practical significance.


Florida AI Research Society Conference | 2008

A Backward Adjusting Strategy and Optimization of the C4.5 Parameters to Improve C4.5's Performance

Jason R. Beck; Maria Garcia; Mingyu Zhong; Michael Georgiopoulos; Georgios C. Anagnostopoulos


Archive | 2007

A Backward Adjusting Strategy for the C4.5 Decision Tree Classifier

Jason R. Beck; Maria Garcia; Mingyu Zhong; Michael Georgiopoulos; Georgios C. Anagnostopoulos


Archive | 2007

An analysis of misclassification rates for decision trees

Michael Georgiopoulos; Georgios C. Anagnostopoulos; Mingyu Zhong


Conference on Artificial Intelligence for Applications | 2007

Experiments with an innovative tree pruning algorithm

Mingyu Zhong; Michael Georgiopoulos; Georgios C. Anagnostopoulos

Collaboration


Dive into Mingyu Zhong's collaborations.

Top Co-Authors

Michael Georgiopoulos, University of Central Florida
S.M. Richie, University of Central Florida
Assem Kaylani, University of Central Florida
Bryan Rosander, University of Central Florida
Christopher Sentelle, University of Central Florida
Dave Coggeshall, University of Central Florida
Ehsan Ghaneie, University of Central Florida
Mark Rivera, University of Central Florida