Publication


Featured research published by Dezhao Chen.


Computers & Chemical Engineering | 2005

Iterative ant-colony algorithm and its application to dynamic optimization of chemical process

Bing Zhang; Dezhao Chen; Weixiang Zhao

For solving dynamic optimization problems of chemical processes numerically, a novel algorithm named the iterative ant-colony algorithm (IACA) was developed in this paper; its main idea is to execute the ant-colony algorithm iteratively, gradually approximating the optimal control profile. The first step of IACA is to discretize the time interval and the control region, turning the continuous dynamic optimization problem into a discrete one. The ant-colony algorithm is then used to seek the best control profile of the discrete dynamic system. Finally, an iteration based on a region-reduction strategy is employed to obtain more accurate results and to enhance the robustness of the algorithm. The iterative ant-colony algorithm is easy to implement. The results of the case studies demonstrated the feasibility and robustness of this method; the IACA approach can be regarded as a reliable and useful optimization tool when gradients are not available.
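The discretize-search-shrink loop described above can be sketched in a few lines. The toy implementation below is only an illustration of the idea (all function names and parameter values are mine, not the paper's): discretize each control step into a few levels, let ants pick levels guided by pheromone, then halve the search region around the best profile found and repeat.

```python
import numpy as np

def iaca(objective, n_steps=5, n_levels=7, n_ants=20,
         iters=15, outer=4, lo=0.0, hi=1.0, rho=0.3, seed=0):
    """Toy iterative ant-colony search for a piecewise-constant
    control profile.  A hypothetical sketch, not the authors' code."""
    rng = np.random.default_rng(seed)
    lo = np.full(n_steps, lo)
    hi = np.full(n_steps, hi)
    best_u, best_f = None, np.inf
    for _ in range(outer):                       # region-reduction loop
        levels = np.linspace(0.0, 1.0, n_levels)  # relative positions
        tau = np.ones((n_steps, n_levels))        # pheromone trails
        for _ in range(iters):                    # ant-colony loop
            for _ in range(n_ants):
                p = tau / tau.sum(axis=1, keepdims=True)
                idx = np.array([rng.choice(n_levels, p=p[t])
                                for t in range(n_steps)])
                u = lo + levels[idx] * (hi - lo)
                f = objective(u)
                if f < best_f:
                    best_u, best_f = u.copy(), f
                # better profiles deposit more pheromone
                tau[np.arange(n_steps), idx] += 1.0 / (1.0 + f)
            tau *= (1.0 - rho)                    # evaporation
        width = (hi - lo) * 0.5                   # shrink region around best
        lo = np.clip(best_u - width / 2, lo, None)
        hi = np.clip(best_u + width / 2, None, hi)
    return best_u, best_f
```

On a simple quadratic tracking objective the recovered profile converges to the optimum without any gradient information, which is the setting the abstract targets.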


Computers & Chemical Engineering | 2000

Optimizing operating conditions based on ANN and modified GAs

Weixiang Zhao; Dezhao Chen; Shangxu Hu

In this paper, an effective method based on artificial neural networks (ANN) and genetic algorithms (GAs) was suggested for modeling processes with unknown or complex mechanisms and then optimizing their operating conditions. Furthermore, a modified GA (MGA) with an adaptive mutation range was proposed to locate the optimum more quickly. The satisfactory results of this investigation demonstrated the feasibility and effectiveness of the suggested method, and in particular showed that the MGA could find optimal values more quickly than conventional GAs.
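As a rough illustration of a GA whose mutation range adapts over the run (a guess at the spirit of the MGA, not the authors' operator), one can shrink the uniform mutation interval as generations pass:

```python
import numpy as np

def mga(objective, dim=2, pop=30, gens=60, lo=-5.0, hi=5.0, seed=0):
    """Toy real-valued GA with a shrinking mutation range.
    Illustrative only; names and schedules are assumptions."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (pop, dim))
    for g in range(gens):
        f = np.array([objective(v) for v in x])
        x = x[np.argsort(f)]                   # rank by fitness (minimize)
        # adaptive mutation range: wide early, narrow late
        r = (hi - lo) * (1.0 - g / gens) ** 2 * 0.5
        parents = x[:pop // 2]                 # elitist selection
        kids = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            c = 0.5 * (a + b)                  # arithmetic crossover
            c += rng.uniform(-r, r, dim)       # mutation within range r
            kids.append(np.clip(c, lo, hi))
        x = np.vstack([parents, kids])
    f = np.array([objective(v) for v in x])
    return x[np.argmin(f)], float(f.min())
```

Early generations explore broadly; late generations fine-tune, which is the behavior the abstract credits for the faster convergence of the MGA.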


Chemometrics and Intelligent Laboratory Systems | 2002

An improved differential evolution algorithm in training and encoding prior knowledge into feedforward networks with application in chemistry

Chong-wei Chen; Dezhao Chen; Guang-zhi Cao

Prior-knowledge-based feedforward networks have shown superior performance in modeling chemical processes. In this paper, an improved differential evolution (IDEP) algorithm is proposed to encode prior knowledge into networks during training. With regard to monotonic prior knowledge, the IDEP algorithm employs a flip operation to adjust prior-knowledge-violating networks so that they conform to the required monotonicity. In addition, two strategies, a Levenberg–Marquardt descent (LMD) strategy and a random perturbation (RP) strategy, are adopted to speed up the differential evolution (DE) in the algorithm and to prevent it from being trapped in local minima, respectively. To demonstrate the IDEP algorithm's efficiency, we apply it to model two chemical curves under an increasing-monotonicity constraint. For comparison, four network-training algorithms without prior-knowledge constraints, as well as three existing prior-knowledge-based algorithms (which share some similarities with the IDEP algorithm), are employed to solve the same problems. The simulation results show that IDEP's performance is better than that of all the other algorithms. Finally, the IDEP algorithm and its prospects are discussed in detail at the end of the paper.
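The flip operation can be illustrated on a one-input tanh network: such a network is non-decreasing when every input-to-output path has a non-negative weight product, so a violating path can be repaired by flipping a sign. This is my reading of the idea, not the paper's exact operator:

```python
import numpy as np

def net(x, w1, b1, w2):
    """One-input, H-hidden-unit, one-output tanh network."""
    return np.tanh(np.outer(x, w1) + b1) @ w2

def flip(w1, w2):
    """Hypothetical 'flip' repair: the net above is non-decreasing iff
    every path product w1[i]*w2[i] is non-negative, so flip the sign
    of the input weight on each violating path."""
    w1 = w1.copy()
    bad = w1 * w2 < 0          # paths that break increasing monotonicity
    w1[bad] = -w1[bad]
    return w1
```

After the repair, every candidate network in the evolving population satisfies the monotonicity constraint by construction, so the search never wastes evaluations on infeasible individuals.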


Computers & Chemical Engineering | 2004

Detection of outliers and a robust BP algorithm against outliers

Weixiang Zhao; Dezhao Chen; Shangxu Hu

In this paper, an outlier detection method based on a radial basis function–partial least squares (RBF–PLS) approach and the Prescott test is proposed to detect outliers in complex systems. Furthermore, a robust training algorithm, the weighted error back-propagation (WBP) algorithm, is also proposed to protect the training of multi-layer feedforward networks (MLFN) from the disturbance of outliers when they must be retained in the data set. The experimental results fully demonstrate the ability of these methods to deal with outliers and to ensure the success of modeling a complex system without clear mechanisms.
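The weighted-error idea, downweighting samples whose residuals look like outliers, can be illustrated with iteratively reweighted least squares on a linear model (a classical stand-in using Tukey's bisquare weights, not the paper's WBP algorithm):

```python
import numpy as np

def robust_fit(x, y, iters=10, c=2.5):
    """Iteratively reweighted least squares with Tukey bisquare
    weights: samples with large residuals get weight ~0, so an
    outlier barely moves the fitted line."""
    X = np.column_stack([np.ones_like(x), x])
    w = np.ones_like(y)
    for _ in range(iters):
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-9   # robust scale estimate
        u = np.abs(r) / (c * s)
        w = np.where(u < 1.0, (1.0 - u ** 2) ** 2, 0.0)  # bisquare weights
    return beta
```

A network trained with such residual-dependent weights behaves the same way: the gradient contribution of a gross outlier is driven toward zero instead of dragging the whole fit.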


Computational Biology and Chemistry | 2001

SOM integrated with CCA for the feature map and classification of complex chemical patterns.

Xuefeng Yan; Dezhao Chen; Yaqiu Chen; Shangxu Hu

Considering that a two-dimensional (2D) feature map of high-dimensional chemical patterns can represent the pattern characteristics more concisely and efficiently, a new procedure integrating self-organizing map (SOM) networks with correlative component analysis (CCA) is proposed. First, CCA is used to identify the most important classification characteristics (CCs) from the original high-dimensional chemical pattern information. Then, the SOM maps the first several CCs, which contain the most useful information for pattern classification, onto a 2D plane, on which the pattern classification features are concisely represented. To improve the learning efficiency of SOM networks, two new algorithms for dynamically adjusting the learning rate and the neighborhood range around the winning unit were developed. In addition, a convenient method for checking the topological nature of the SOM results was proposed. Finally, a typical example, mapping two classes of natural spearmint essences, was employed to verify the effectiveness of the new approach. The resulting feature-topology-preserving (FTP) map represents the classification of the original patterns well and is much better than that obtained by SOM alone.
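A minimal 1-D SOM with a decaying learning rate and a shrinking neighborhood shows the kind of schedule being tuned here; the exponential schedules below are generic SOM practice, not the paper's specific adjustment algorithms:

```python
import numpy as np

def train_som(data, n_units=10, epochs=40, seed=0):
    """Minimal 1-D SOM.  Learning rate and neighbourhood radius both
    decay exponentially; names and constants are assumptions."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n_units, data.shape[1]))   # codebook vectors
    pos = np.arange(n_units)                        # 1-D grid positions
    for e in range(epochs):
        lr = 0.5 * np.exp(-e / epochs)                     # rate decay
        sigma = (n_units / 2) * np.exp(-4.0 * e / epochs)  # radius shrinks
        for x in rng.permutation(data):
            bmu = np.argmin(np.linalg.norm(W - x, axis=1))  # winning unit
            h = np.exp(-((pos - bmu) ** 2) / (2 * sigma ** 2))
            W += lr * h[:, None] * (x - W)          # neighbourhood update
    return W
```

Early in training the wide neighborhood orders the map globally; late in training the near-zero radius lets each unit settle on its own cluster, which is what makes the final quantization tight.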


Chemometrics and Intelligent Laboratory Systems | 1996

Correlative components analysis for pattern classification

Dezhao Chen; Yaqiu Chen; Shangxu Hu

Based upon a novel method named correlative components analysis, a simple but efficient pattern classification technique is proposed in this paper. Using this method, the relatively important components of a high-dimensional pattern can be identified, the original problem is mapped onto a lower-dimensional space, and the complexity of a high-dimensional pattern classification problem is therefore substantially reduced. For comparison with sequential discriminant analysis and principal component analysis, an example of classifying complex chemical information was used, and the results verified that the new method is sound in principle and successful in practice.
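Correlative components analysis is the authors' own method, so the sketch below substitutes a classical analogue, Fisher's linear discriminant, purely to illustrate "project high-dimensional patterns onto a few class-discriminating components":

```python
import numpy as np

def fisher_direction(X0, X1):
    """Fisher's linear discriminant direction -- NOT the paper's CCA,
    just a classical stand-in for finding the component that matters
    most for separating two classes of high-dimensional patterns."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class scatter matrix.
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    w = np.linalg.solve(Sw, m1 - m0)
    return w / np.linalg.norm(w)
```

Projecting onto one such direction reduces a 5-dimensional classification problem to a 1-dimensional threshold test, the same kind of dimensionality reduction the abstract claims for CCA.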


Computational Biology and Chemistry | 2001

Prior-knowledge-based feedforward network simulation of true boiling point curve of crude oil

Chong-wei Chen; Dezhao Chen

Theoretical results and practical experience indicate that feedforward networks can approximate a wide class of functional relationships very well. This property is exploited in modeling chemical processes. Given finite and noisy training data, it is important to encode prior knowledge in neural networks to improve the fit precision and the predictive ability of the model. In this paper, for three-layer feedforward networks under a monotonicity constraint, the unconstrained method, Joerding's penalty-function method, the interpolation method, and the constrained optimization method are analyzed first. Then two novel methods, the exponential weight method and the adaptive method, are proposed. These methods are applied to simulate the true boiling point curve of a crude oil under an increasing-monotonicity condition. The simulation results show that the network models trained by the novel methods approximate the actual process well. Finally, all these methods are discussed and compared with each other.
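One plausible reading of the exponential weight method (an assumption of mine, not necessarily the paper's formulation) is to parameterize each weight as exp(v): every input-to-output path product is then positive, so the network output is non-decreasing by construction.

```python
import numpy as np

def mono_net(x, v1, b1, v2, b2=0.0):
    """Monotone network via exponential weights: both weight layers are
    exp(v) > 0, so every path product is positive and the output is
    guaranteed non-decreasing in x, whatever v1, b1, v2 are.
    (Sketch under assumed names and shapes.)"""
    h = np.tanh(np.outer(x, np.exp(v1)) + b1)
    return h @ np.exp(v2) + b2
```

The unconstrained parameters v can then be trained with any optimizer, and the monotonicity of a true boiling point curve holds at every training step rather than being enforced by a penalty.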


Computational Biology and Chemistry | 1997

A pattern classification procedure integrating the multivariate statistical analysis with neural networks.

Dezhao Chen; Yaqiu Chen; Shangxu Hu

A new procedure integrating multivariate statistical analysis with artificial neural networks (ANN) for complex pattern classification is proposed. Firstly, a specially designed statistical analysis algorithm called correlative component analysis (CCA) was used to identify the classification characteristics (CC) from original high-dimensional pattern information. These CC were then used as input data to the ANN for pattern classification. The proposed new procedure not only effectively decreased the dimensionality of original patterns, but also took advantage of the self-learning power of the ANN. Further, a typical example of classifying natural spearmint essence was employed to verify the effectiveness of the new pattern classification method. The study showed that this novel integrated procedure provides better results than those obtained using individual methods separately.


Chemometrics and Intelligent Laboratory Systems | 1999

Quantitative structure–activity relationships study of herbicides using neural networks and different statistical methods

Yaqiu Chen; Dezhao Chen; Chunyan He; Shangxu Hu

A series of herbicidal materials, N-phenylacetamides (NPAs), has been studied for their Quantitative Structure–Activity Relationships (QSAR). The molecular structures as well as the activity data were taken from the literature [O. Kirino, C. Takayama, A. Mine, Quantitative structure–activity relationships of herbicidal N-(1-methyl-1-phenylethyl)phenylacetamides, Journal of Pesticide Science 11 (1986) 611–617]. The independent variables used to describe the structure of the compounds consisted of seven physicochemical properties, including the mode of molecular connection, a steric factor, a hydrophobicity parameter, etc. Fifty compounds constitute the sample set, which is divided into two groups: 47 of them form a training set and the remaining three a checking set. Through a systematic study using classical multivariate analyses such as Multiple Linear Regression (MLR), Principal Component Analysis (PCA), and Partial Least Squares (PLS) regression, several QSAR models were established. To better capture the nonlinear nature of the problem, multi-layered feed-forward (MLF) neural networks (NNs) were employed. The results indicated that the conventional multivariate analyses gave larger prediction errors, while the NN method showed better accuracy in both self-checking and prediction-checking. The error variance of the predictions made by the NNs was the smallest among all the methods tested, only around half that of the others.
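The 47/3 training/checking protocol with an MLR baseline can be sketched on synthetic data; the seven descriptors and activities below are randomly generated placeholders, not the NPA data:

```python
import numpy as np

# Synthetic stand-in for the 50-compound data set: 7 random descriptors
# and a noisy linear "activity" (placeholder values only).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 7))
true_coef = rng.normal(size=7)
y = X @ true_coef + 0.1 * rng.normal(size=50)

# 47/3 split as in the paper: training set and checking set.
Xtr, ytr, Xte, yte = X[:47], y[:47], X[47:], y[47:]

# Multiple linear regression via least squares, with an intercept term.
A = np.column_stack([np.ones(len(Xtr)), Xtr])
beta = np.linalg.lstsq(A, ytr, rcond=None)[0]

# Prediction-checking on the three held-out compounds.
pred = np.column_stack([np.ones(len(Xte)), Xte]) @ beta
rmse = float(np.sqrt(np.mean((pred - yte) ** 2)))
```

The same split and error measure apply unchanged when the MLR model is swapped for PLS or a neural network, which is how the abstract's method comparison is framed.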


Archive | 2007

Integration Method of Ant Colony Algorithm and Rough Set Theory for Simultaneous Real Value Attribute Discretization and Attribute Reduction

Yi-Jun He; Dezhao Chen; Weixiang Zhao

Discretization of real-value attributes (features) is an important pre-processing task in data mining, particularly for classification problems, and it has received significant attention in the machine learning community (Chmielewski & Grzymala-Busse, 1994; Dougherty et al., 1995; Nguyen & Skowron, 1995; Nguyen, 1998; Liu et al., 2002). Various studies have shown that discretization methods have the potential to reduce the amount of data while retaining, or even improving, predictive accuracy. Moreover, as reported in one study (Dougherty et al., 1995), discretization makes learning faster. However, most typical discretization methods are univariate, which may fail to capture the correlation of attributes and degrade the performance of a classification model. As reported by Liu et al. (2002), the numerous discretization methods in the literature can be categorized along several dimensions: dynamic vs. static, local vs. global, splitting vs. merging, direct vs. incremental, and supervised vs. unsupervised. A hierarchical framework was also given to categorize the existing methods and pave the way for further development. Much work has been done, but many issues remain unsolved and new methods are needed (Liu et al., 2002). Since many discretization methods are available, how does one evaluate their discretization effects? In this study, we focus on simplicity-based criteria while preserving consistency, where simplicity is measured by the number of cuts: the fewer the cuts a discretization method needs, the better the method. Hence, real-value attribute discretization can be defined as the problem of searching for a globally minimal set of cuts on attribute domains while preserving consistency, which has been shown to be NP-hard (Nguyen, 1998).
Rough set theory (Pawlak, 1982) has been considered an effective mathematical tool for dealing with uncertain, imprecise and incomplete information and has been successfully applied in fields such as knowledge discovery, decision support, and pattern classification. However, rough set theory is suited only to discrete attributes, and it needs discretization as a pre-processing step for real-value attributes. Moreover, attribute reduction is another key problem in rough set theory, and finding a minimal
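The consistency requirement, that no two objects share a discretized description yet differ in class, is easy to state in code; the helpers below are a generic sketch (names are mine), with the number of cuts serving as the simplicity measure described above:

```python
import numpy as np

def consistent(X, y, cuts):
    """True iff the cut set preserves consistency: no two objects share
    a discretized description but carry different class labels.
    cuts[j] is a sorted list of cut points for attribute j."""
    codes = np.column_stack(
        [np.searchsorted(cuts[j], X[:, j]) for j in range(X.shape[1])])
    seen = {}
    for row, label in zip(map(tuple, codes), y):
        if seen.setdefault(row, label) != label:
            return False            # same description, different class
    return True

def simplicity(cuts):
    """Simplicity criterion from the text: total number of cuts."""
    return sum(len(c) for c in cuts)
```

Searching for a cut set that keeps `consistent` true while minimizing `simplicity` is exactly the NP-hard problem the chapter attacks with an ant-colony search.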

Collaboration


Top co-authors of Dezhao Chen:

Weixiang Zhao (University of California)
Yi-Jun He (Shanghai Jiao Tong University)
Bing Zhang (East China University of Science and Technology)