Arthur K. Kordon
Dow Chemical Company
Publications
Featured research published by Arthur K. Kordon.
Computers & Chemical Engineering | 2004
Leo H. Chiang; Mark Kotanchek; Arthur K. Kordon
The proficiencies of Fisher discriminant analysis (FDA), support vector machines (SVM), and proximal support vector machines (PSVM) for fault diagnosis (i.e. classification of multiple fault classes) are investigated. The Tennessee Eastman process (TEP) simulator was used to generate overlapping datasets to evaluate the classification performance. When all variables were used, the datasets were masked with irrelevant information, which resulted in poor classification. With key variables selected by genetic algorithms and contribution charts, SVM and PSVM outperformed FDA, demonstrating the advantage of nonlinear techniques when the data overlap. The overall misclassification rate on the testing data dropped from 38% to 18% for FDA and from 44–45% to 6% for SVM and PSVM. PSVM increases the effectiveness of the proposed approach by saving significant computation time and memory while obtaining comparable classification results. For auto-correlated data, incorporating time lags into SVM and PSVM improved the classification further: the added dimensions reduced the degree of overlap, and the overall misclassification rate on the testing set decreased to 3%.
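For illustration, here is a minimal Python sketch of the general recipe the abstract describes, not the authors' actual TEP experiments: train a multi-class SVM on a hand-picked subset of variables (standing in for the GA / contribution-chart selection) and append time-lagged copies of those variables for auto-correlated data. The data, the variable subset, and the hyperparameters are synthetic stand-ins.

```python
# Minimal sketch (not the authors' TEP setup): multi-class fault diagnosis with an
# SVM after variable selection, plus optional time-lagged features for
# auto-correlated data. Data, labels, and the selected subset are toy stand-ins.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 52))          # e.g. 52 process variables, 600 samples
y = rng.integers(0, 3, size=600)        # three overlapping fault classes (toy labels)

key_vars = [0, 4, 10, 21]               # hypothetical subset picked by a GA / contribution charts
X_key = X[:, key_vars]

def add_lags(X, lags=2):
    """Append time-lagged copies of each variable to reduce class overlap."""
    parts = [X[lags:]] + [X[lags - k: -k] for k in range(1, lags + 1)]
    return np.hstack(parts)

X_lagged = add_lags(X_key, lags=2)
y_lagged = y[2:]                        # drop the first samples lost to lagging

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_lagged, y_lagged)
print("training accuracy:", clf.score(X_lagged, y_lagged))
```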
Archive | 2006
Guido Smits; Arthur K. Kordon; Katherine Vladislavleva; Elsa M. Jordaan; Mark Kotanchek
This chapter gives an overview, based on the experience from the Dow Chemical Company, of the importance of variable selection to build robust models from industrial datasets. A quick review of variable selection schemes based on linear techniques is given. A relatively simple fitness inheritance scheme is proposed to do nonlinear sensitivity analysis that is especially effective when combined with Pareto GP. The method is applied to two industrial datasets with good results.
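A minimal sketch of the general idea, not the chapter's actual fitness-inheritance scheme: score each input by the fitness of the GP models in the population that use it, then rank the variables for selection. The population and fitness values below are made up for illustration.

```python
# Sketch: fitness-weighted variable sensitivity from a (hypothetical) GP population.
from collections import defaultdict

# Each entry: (set of input indices used by the model, fitness in [0, 1]).
population = [
    ({0, 2}, 0.91),
    ({0, 2, 5}, 0.88),
    ({1, 3}, 0.42),
    ({2, 5}, 0.86),
    ({4}, 0.15),
]

scores = defaultdict(float)
counts = defaultdict(int)
for inputs_used, fitness in population:
    for var in inputs_used:
        scores[var] += fitness          # each variable "inherits" the model's fitness
        counts[var] += 1

# Average inherited fitness per variable: higher means more relevant input.
sensitivity = {var: scores[var] / counts[var] for var in scores}
for var, s in sorted(sensitivity.items(), key=lambda kv: -kv[1]):
    print(f"x{var}: {s:.2f}")
```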
Systems, Man, and Cybernetics | 2010
Plamen Angelov; Arthur K. Kordon
A new approach to the design and use of inferential sensors in the process industry is proposed in this paper, based on the recently introduced concept of evolving fuzzy models (EFMs). It addresses a challenge that the modern process industry faces today: developing adaptive, self-calibrating online inferential sensors that reduce maintenance costs while retaining high precision and interpretability/transparency. The proposed methodology makes it possible for inferential sensors to recalibrate automatically, which significantly reduces the life-cycle effort for their maintenance. This is achieved by the adaptive, flexible, open-structure EFM used. The novelty of this paper lies in the following: (1) the overall concept of inferential sensors with a structure that evolves and self-develops from data streams; (2) a new methodology for online automatic selection of the input variables most relevant for prediction; (3) a technique to automatically detect a shift in the data pattern using the age of the clusters (and fuzzy rules); (4) the online standardization technique used by the learning procedure of the evolving model; and (5) the application of this approach to several real-life industrial processes in the chemical industry (evolving inferential sensors, namely eSensors, were used to predict the chemical properties of different products at The Dow Chemical Company, Freeport, TX). It should be noted, however, that the methodology and conclusions of this paper are valid for the broader area of chemical and process industries in general. The results demonstrate that interpretable inferential sensors with a simple structure can be designed automatically from the data stream in real time to predict various process variables of interest. The proposed approach can serve as a basis for a new generation of adaptive and evolving inferential sensors that address the challenges of the modern advanced process industry.
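The sketch below is a heavily simplified illustration of an evolving fuzzy model, not the eSensor algorithm from the paper: a new rule (cluster) is spawned when a sample falls far from all existing centres, and each rule's local linear model is adapted online. The class name, threshold, and update rule are assumptions made for the example.

```python
# Toy evolving inferential sensor: rule creation by distance to cluster centres,
# online adaptation of per-rule local linear models. Not the eSensor algorithm.
import numpy as np

class TinyEvolvingSensor:
    def __init__(self, radius=1.5, lr=0.05):
        self.radius = radius      # distance threshold for spawning a new rule
        self.lr = lr              # learning rate for the local linear models
        self.centres = []         # one centre per fuzzy rule
        self.weights = []         # local linear model per rule: y ~ w.x + b
        self.biases = []

    def _memberships(self, x):
        d = np.array([np.linalg.norm(x - c) for c in self.centres])
        m = np.exp(-(d / self.radius) ** 2)
        return m / (m.sum() + 1e-12)

    def update(self, x, y):
        x = np.asarray(x, dtype=float)
        if not self.centres or min(np.linalg.norm(x - c) for c in self.centres) > self.radius:
            # Structure evolution: add a new rule centred at this sample.
            self.centres.append(x.copy())
            self.weights.append(np.zeros_like(x))
            self.biases.append(float(y))
            return
        m = self._memberships(x)
        for i, mi in enumerate(m):
            # Parameter adaptation: gradient step weighted by rule activation.
            err = y - (self.weights[i] @ x + self.biases[i])
            self.weights[i] += self.lr * mi * err * x
            self.biases[i] += self.lr * mi * err

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        if not self.centres:
            return 0.0
        m = self._memberships(x)
        return float(sum(mi * (w @ x + b)
                         for mi, w, b in zip(m, self.weights, self.biases)))
```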
Archive | 2006
Arthur K. Kordon; Flor A. Castillo; Guido Smits; Mark Kotanchek
This chapter gives a systematic view, based on the experience from The Dow Chemical Company, of the key issues for applying symbolic regression with Genetic Programming (GP) in industrial problems. The competitive advantages of GP are defined and several industrial problems appropriate for GP are recommended and referenced with specific applications in the chemical industry. A systematic method for selecting the key GP parameters, based on statistical design of experiments, is proposed. The most significant technical and non-technical issues for delivering a successful GP industrial application are discussed briefly.
Archive | 2003
Mark Kotanchek; Guido Smits; Arthur K. Kordon
Since the mid-1990s, symbolic regression via genetic programming (GP) has become a core component of a multi-disciplinary approach to empirical modeling at Dow Chemical. Herein we review the role of symbolic regression within an integrated empirical modeling methodology, discuss symbolic regression system design issues, best practices, and lessons learned from industrial application, and present future directions for research and application.
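As a small, self-contained illustration of symbolic regression via GP (using the third-party gplearn package rather than the in-house Dow toolset described in the chapter, which is not publicly available), the following recovers an explicit expression from synthetic data:

```python
# Tiny symbolic-regression example with the gplearn package (assumed installed);
# the data and target expression are invented for illustration.
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(200, 2))
y = X[:, 0] ** 2 + X[:, 1] - 0.5           # hidden target expression

est = SymbolicRegressor(population_size=500, generations=20,
                        function_set=("add", "sub", "mul"),
                        parsimony_coefficient=0.01, random_state=0)
est.fit(X, y)
print(est._program)                        # prints the evolved explicit expression
```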
Congress on Evolutionary Computation | 2002
Arthur K. Kordon; Guido Smits; Elsa M. Jordaan; Ed Rightor
A novel approach for the development of inferential sensors, based on the integration of three key computational intelligence techniques (genetic programming, analytical neural networks, and support vector machines), is proposed. The advantages of this type of soft sensor are its good generalization capability, increased robustness, explicit input/output relationships, self-assessment capabilities, and low implementation and maintenance cost.
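A minimal sketch of the integration idea, with assumptions: the explicit "GP" expression below is a made-up stand-in for a symbolic-regression model, and the spread among the three learners is used as a crude self-assessment signal rather than the paper's actual scheme.

```python
# Combine an explicit (GP-style) formula, a neural network, and an SVM into one
# soft sensor; disagreement between the learners flags low-confidence estimates.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(400, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=400)  # toy process

def gp_model(X):
    # Hypothetical explicit expression, as symbolic regression would deliver.
    return np.sin(X[:, 0]) + 0.45 * X[:, 1] ** 2

nn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
svm = SVR(C=10.0).fit(X, y)

def soft_sensor(Xnew):
    preds = np.column_stack([gp_model(Xnew), nn.predict(Xnew), svm.predict(Xnew)])
    estimate = preds.mean(axis=1)
    spread = preds.std(axis=1)     # large spread -> low confidence in the estimate
    return estimate, spread

est, spread = soft_sensor(X[:5])
print(est, spread)
```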
Archive | 2005
Flor A. Castillo; Arthur K. Kordon; Jeff Sweeney; Wayne Zirk
The chapter summarizes the practical experience of integrating genetic programming and statistical modeling at The Dow Chemical Company. A unique methodology for using Genetic Programming in statistical modeling of designed and undesigned data is described and illustrated with successful industrial applications. As a result of these synergistic efforts, the model-building technique has been improved, and model development cost and time can be significantly reduced. In the case of designed data, Genetic Programming reduced costs by suggesting transformations as an alternative to additional experimentation. In the case of undesigned data, Genetic Programming was instrumental in reducing model-building costs by providing alternative models for consideration.
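A toy illustration (the transformation and data are invented, not taken from the chapter) of why a GP-suggested variable transform helps: a single transformed input can linearise a response that a plain linear model in the raw inputs fits poorly, avoiding further experimentation.

```python
# Compare an ordinary least-squares fit on raw inputs vs. on a single
# (hypothetical) GP-suggested transformed input x1*exp(x2).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
x1 = rng.uniform(0.5, 2.0, size=60)
x2 = rng.uniform(0.0, 1.0, size=60)
y = 3.0 * x1 * np.exp(x2) + 0.05 * rng.normal(size=60)     # toy response

raw = np.column_stack([x1, x2])
transformed = (x1 * np.exp(x2)).reshape(-1, 1)              # GP-suggested transform

print("R^2, raw inputs:        ", LinearRegression().fit(raw, y).score(raw, y))
print("R^2, transformed input: ", LinearRegression().fit(transformed, y).score(transformed, y))
```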
Archive | 2010
Arthur K. Kordon
The flow of academic ideas in the area of computational intelligence is impacting industrial practice at considerable speed. Practitioners face the challenge of tracking, understanding and applying the latest techniques, which often prove their value even before the underlying theories are fully understood. This book offers realistic guidelines on creating value from the application of computational intelligence methods. In Part I, the author offers simple explanations of the key computational intelligence technologies: fuzzy logic, neural networks, support vector machines, evolutionary computation, swarm intelligence, and intelligent agents. In Part II, he defines the typical business environment and analyzes the competitive advantages these techniques offer. In Part III, he introduces a methodology for effective real-world application of computational intelligence while minimizing development cost, and he outlines the critical, underestimated technology marketing efforts required. The methodology can improve the existing capabilities of Six Sigma, one of the most popular work processes in industry. Finally, in Part IV the author looks to technologies still in the research domain, such as perception-based computing, artificial immune systems, and systems with evolved structure, and he examines the future for computational intelligence applications while taking into account projected industrial needs. The author adopts a light tone in the book, visualizes many of the techniques and ideas, and supports the text with notes from successful implementations. The book is ideal for engineers implementing these techniques in the real world, managers charged with creating value and reducing costs in the related industries, and scientists in computational intelligence looking towards the application of their research.
Genetic and Evolutionary Computation Conference | 2006
Flor A. Castillo; Arthur K. Kordon; Guido Smits; Ben Christenson; Dee Dickerson
Symbolic regression based on Pareto Front GP is a key approach for generating high-performance, parsimonious empirical models acceptable for industrial applications. The paper addresses the issue of finding the optimal parameter settings of Pareto Front GP that direct the simulated evolution toward simple models with acceptable prediction error. A generic methodology based on statistical design of experiments is proposed. It includes statistical determination of the number of replicates by half-width confidence intervals, determination of the significant inputs by fractional factorial design of experiments, approaching the optimum by steepest ascent/descent, and local exploration around the optimum by Box-Behnken or central composite designs of experiments. The results from applying the proposed methodology to a small industrial data set show that the statistically significant factors for symbolic regression based on Pareto Front GP are the number of cascades, the number of generations, and the population size. A second-order regression model with a high R2 of 0.97 includes these three parameters, and their optimal values have been determined. The optimal parameter settings were validated with a separate small industrial data set and are recommended for symbolic regression applications using data sets with up to 5 inputs and up to 50 data points.
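A minimal response-surface sketch with made-up data, not the paper's experiments: fit a second-order model of a GP "quality" response over three coded factors (population size, generations, cascades) and read off the best region. A full 3^3 grid is used here for simplicity instead of the paper's fractional factorial and Box-Behnken/central composite designs.

```python
# Second-order response-surface fit over three coded factors (toy data).
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
levels = np.array([-1, 0, 1])                                   # coded factor levels
design = np.array([[a, b, c] for a in levels for b in levels for c in levels])
response = (0.8 - 0.05 * design[:, 0] ** 2 - 0.04 * design[:, 1] ** 2
            + 0.03 * design[:, 2] + 0.02 * rng.normal(size=len(design)))  # toy "model quality"

rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(design, response)
print("R^2 of second-order model:", rsm.score(design, response))
print("best design point on grid:", design[np.argmax(rsm.predict(design))])
```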
Genetic and Evolutionary Computation Conference | 2004
Arthur K. Kordon; Elsa M. Jordaan; Lawrence Chew; Guido Smits; Torben R. Bruck; Keith L. Haney; Annika Jenings
A successful industrial application of a novel type of biomass estimator based on Genetic Programming (GP) is described in the paper. The biomass is inferred from other available measurements via an ensemble of nonlinear functions generated by GP. The models are selected on the Pareto front of the performance-complexity plane. The advantages of the proposed inferential sensor are: direct implementation into almost any process control system, rudimentary self-assessment capabilities, better robustness toward batch variations, and more effective maintenance. The biomass inferential sensor has been applied to high-cell-density microbial fermentations at The Dow Chemical Company.
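A small sketch of the model-selection step only, with hypothetical candidate models: keep the models that are Pareto-optimal in the complexity-versus-error plane; the surviving set would then form the ensemble used by the inferential sensor.

```python
# Pareto-front selection over (complexity, validation error) for candidate models.
candidates = [
    # (name, complexity = e.g. number of tree nodes, validation error)
    ("m1", 5, 0.30),
    ("m2", 9, 0.22),
    ("m3", 12, 0.21),
    ("m4", 17, 0.12),
    ("m5", 25, 0.13),
]

def pareto_front(models):
    """A model stays if no other model is at least as simple AND at least as accurate,
    with a strict improvement in one of the two."""
    front = []
    for name, comp, err in models:
        dominated = any(c <= comp and e <= err and (c < comp or e < err)
                        for n, c, e in models if n != name)
        if not dominated:
            front.append((name, comp, err))
    return front

print(pareto_front(candidates))   # m1-m4 survive; m5 is dominated by m4
```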