Yuri A. W. Shardt
University of Duisburg-Essen
Publications
Featured research published by Yuri A. W. Shardt.
IEEE Transactions on Industrial Electronics | 2015
Yuri A. W. Shardt; Haiyang Hao; Steven X. Ding
The development of advanced techniques for process monitoring and fault diagnosis using both model-based and data-driven approaches has led to many practical applications. One issue that has not been considered in such applications is the ability to deal with key performance indicators (KPIs) that are only sporadically measured and with significant time delay. Therefore, in this paper, the data-driven design of diagnostic-observer-based process monitoring schemes is extended to include the ability to detect changes given infrequently measured KPIs. The extended diagnostic observer is shown to be stable and hence able to converge to the true value. The proposed method is tested using both Monte Carlo simulations and the Tennessee-Eastman problem. It is shown that although time delay and sampling time increase the detection delay, the overall effect can be mitigated by using a soft sensor. Furthermore, it is shown that the results are not strongly dependent on the sampling time, but do depend on the time delay. Therefore, the proposed soft-sensor-based monitoring scheme can efficiently detect faults even in the absence of direct process information.
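The sparse-KPI monitoring idea above can be illustrated with a minimal sketch: a static soft sensor, fitted on fault-free data, supplies KPI estimates between the infrequent measurements, and an alarm fires when the residual exceeds a threshold. The first-order gain model, sampling pattern, and threshold rule are illustrative assumptions, not the diagnostic observer from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical first-order process: the KPI tracks the input with gain 2.0.
n, gain = 400, 2.0
u = rng.normal(size=n)
kpi = gain * u + 0.05 * rng.normal(size=n)
kpi[200:] += 1.5                       # additive fault at sample 200

# The KPI is only measured infrequently; a soft sensor (here a static
# least-squares fit on the fault-free segment) fills the gaps.
sensor_gain = np.sum(u[:200] * kpi[:200]) / np.sum(u[:200] ** 2)
kpi_hat = sensor_gain * u              # soft-sensor estimate at every sample

# Residual-based detection (in practice only the sparse KPI samples exist).
residual = kpi - kpi_hat
threshold = 4 * np.std(residual[:200])
alarms = np.where(np.abs(residual) > threshold)[0]
detection_time = int(alarms[alarms >= 200][0])
```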
Journal of The Franklin Institute-engineering and Applied Mathematics | 2017
Kai Zhang; Steven X. Ding; Yuri A. W. Shardt; Zhiwen Chen; Kaixiang Peng
The pioneering multivariate statistical process monitoring (MSPM) methods use the Q-statistic as an alternative to the T2-statistic to detect faults occurring in the residual subspace spanned by the process variables, since directly using T2 for this subspace can lead to numerical problems. This practice has also spread to current work in the MSPM field. However, substantial improvements in computational resources have largely mitigated the numerical problem, leading to a need to assess the detectability of the two statistics when used in the same position. This paper seeks to resolve this historical issue by examining the two statistics in light of the fault detection rate (FDR) index to assess their performance when detecting both additive and multiplicative faults. Theoretical and simulation results show that the two statistics have different impacts on computing the FDR. Furthermore, it is shown that the T2-statistic performs better, in terms of the FDR, at detecting most additive and multiplicative faults. Finally, based on these results, a remedy for the interpretation of traditional MSPM methods is given.
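For readers unfamiliar with the two statistics, a minimal PCA-based sketch (toy data and model orders assumed, not taken from the paper) shows how the T2- and Q-statistics split a measurement between the principal-component and residual subspaces:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy training data: 5 measured variables driven by 2 latent factors.
T = rng.normal(size=(500, 2))
X = T @ rng.normal(size=(2, 5)) + 0.1 * rng.normal(size=(500, 5))
X = X - X.mean(axis=0)

# PCA via SVD; retain l = 2 principal components.
_, s, Vt = np.linalg.svd(X, full_matrices=False)
l = 2
P = Vt[:l].T                            # orthonormal loadings
Lam = s[:l] ** 2 / (X.shape[0] - 1)     # retained PC variances

def t2_statistic(x):
    """Hotelling's T2 in the principal-component subspace."""
    t = P.T @ x
    return float(t @ (t / Lam))

def q_statistic(x):
    """Q (squared prediction error) in the residual subspace."""
    r = x - P @ (P.T @ x)
    return float(r @ r)

x_normal = X[0]
```

A vector lying entirely in the retained subspace has Q equal to zero, and both statistics scale quadratically with the sample, which is easy to verify numerically.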
Isa Transactions | 2017
Kai Zhang; Yuri A. W. Shardt; Zhiwen Chen; Xu Yang; Steven X. Ding; Kaixiang Peng
Large-scale processes, consisting of multiple interconnected subprocesses, are commonly encountered in industrial systems, whose performance needs to be determined. A common approach to this problem is to use a key performance indicator (KPI)-based approach. However, the different KPI-based approaches have not been developed within a coherent and consistent framework. Thus, this paper proposes a framework for KPI-based process monitoring and fault detection (PM-FD) for large-scale industrial processes that considers the static and dynamic relationships between process and KPI variables. For the static case, a least squares-based approach is developed that provides an explicit link with least-squares regression, which gives better performance than partial least squares. For the dynamic case, using the kernel representation of each subprocess, an instrumental variable is used to reduce the dynamic case to the static case. This framework is applied to the TE benchmark process and the hot strip mill rolling process. The results show that the proposed method can detect faults better than previous methods.
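The static case can be illustrated with a short least-squares sketch; the linear KPI model and fault size here are assumed for illustration only:

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed toy setup: process variables X and a KPI that depends on them linearly.
n = 300
X = rng.normal(size=(n, 4))
theta_true = np.array([1.0, -0.5, 0.3, 0.0])
y = X @ theta_true + 0.05 * rng.normal(size=n)

# Static KPI model by ordinary least squares on fault-free data.
theta, *_ = np.linalg.lstsq(X, y, rcond=None)

def kpi_residual(x_new, y_new):
    """Residual between the measured KPI and its least-squares prediction."""
    return float(y_new - x_new @ theta)

# A KPI-relevant fault shifts the measured KPI away from the prediction.
x_new = rng.normal(size=4)
r_ok = kpi_residual(x_new, x_new @ theta_true)
r_fault = kpi_residual(x_new, x_new @ theta_true + 1.0)
```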
IEEE Transactions on Industrial Electronics | 2016
Shane Dominic; Yuri A. W. Shardt; Steven X. Ding; Hao Luo
The need to deal with rapid change in an environmentally and economically friendly manner has led to renewed interest in data-driven, online process optimization. Although various methods, such as economic model predictive control (EMPC), are available to achieve this goal, they require that the process model be available and relatively accurate and that there be no process changes. Recently, the focus has shifted to using economic key performance indicators (KPIs) to design supervisory controllers to regulate the process. In order to accomplish this, accurate models of the highly nonlinear KPIs are needed. A solution to this problem is to develop a two-step control strategy consisting of a static, offline component and a dynamic, online component. This paper proposes the use of a linear BILIMOD method combined with a self-partitioning algorithm for the static component and a gradient-based optimization method for the dynamic component. In order to deal with process changes, the static model parameters are updated. The proposed control strategy is tested on a wastewater treatment process. It is shown that the proposed method can quickly and effectively achieve the desired optimal point with minimal disturbance to the overall process.
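The two-step strategy can be sketched with a toy quadratic KPI surface standing in for the static model, and plain gradient descent as the dynamic, online component; none of the numerical values come from the paper:

```python
import numpy as np

# Static component (offline): a fitted quadratic KPI model
# J(u) = (u - u_opt)^2 + c, a stand-in for the identified KPI surface.
u_opt, c = 3.0, 1.0
kpi = lambda u: (u - u_opt) ** 2 + c
grad = lambda u: 2.0 * (u - u_opt)

# Dynamic component (online): gradient descent toward the KPI optimum.
u = 0.0
for _ in range(100):
    u -= 0.1 * grad(u)
```

Updating `u_opt` and `c` from fresh data would play the role of the static model-parameter update that handles process changes.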
Engineering Applications of Artificial Intelligence | 2016
Siamak Mehrkanoon; Yuri A. W. Shardt; Johan A. K. Suykens; Steven X. Ding
Although time delay is an important element in both system identification and control performance assessment, its computation remains elusive. This paper proposes the application of a least-squares support vector machine (LS-SVM)-based approach to the problem of determining the constant time delay of a chemical process. The approach consists of two steps: in the first step, the state of the system and its derivative are approximated using the LS-SVM model; the second step consists of modeling the delay term and estimating the unknown model parameters as well as the time delay of the system. Therefore, the proposed approach avoids integrating the given differential equation, which can be computationally expensive. This time delay estimation method is applied to both simulation and experimental data obtained from a continuous, stirred, heated tank. The results show that the proposed method can provide accurate estimates even if significant noise or unmeasured additive disturbances are present.
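A minimal LS-SVM regression sketch (RBF kernel, illustrative hyperparameters; the derivative-approximation and delay-modeling steps of the paper are omitted) shows the single linear system that replaces the quadratic program of a standard SVM:

```python
import numpy as np

rng = np.random.default_rng(3)

# Noisy samples of a smooth signal (a stand-in for tank measurement data).
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + 0.05 * rng.normal(size=40)

def rbf(a, b, sigma=0.2):
    """Gaussian (RBF) kernel matrix between two sets of scalar inputs."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * sigma ** 2))

# LS-SVM regression: the dual variables and bias come from one linear system
#   [0  1^T ] [b    ]   [0]
#   [1  K+I/g] [alpha] = [y]
gamma = 100.0
K = rbf(x, x)
n = len(x)
A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
              [np.ones((n, 1)), K + np.eye(n) / gamma]])
sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
b, alpha = sol[0], sol[1:]

def predict(xq):
    """Evaluate the fitted LS-SVM model at query points."""
    return rbf(np.atleast_1d(xq), x) @ alpha + b
```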
Isa Transactions | 2017
Kai Zhang; Yuri A. W. Shardt; Zhiwen Chen; Kaixiang Peng
The expected detection delay (EDD) index has recently been developed to measure the performance of multivariate statistical process monitoring (MSPM) methods for constant additive faults. This paper, based on a statistical investigation of the T2- and Q-test statistics, extends the EDD index to the multiplicative and drift fault cases. As well, it is used to assess the performance of common MSPM methods that adopt these two test statistics. Based on how they use the measurement space, these methods can be divided into two groups: those that consider the complete measurement space, for example, principal component analysis-based methods, and those that only consider a subspace that reflects changes in key performance indicators, such as partial least squares-based methods. Furthermore, a generic form for their use of the T2- and Q-test statistics is given. With the extended EDD index, the performance of these methods at detecting drift and multiplicative faults is assessed using both numerical simulations and the Tennessee Eastman process.
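The EDD index for a drift fault can be estimated by Monte Carlo simulation, as in this univariate sketch; the drift rate, threshold, and chi-squared-type statistic are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Monte Carlo estimate of the expected detection delay (EDD) for a drift
# fault on a single monitored variable with a chi-squared-type test.
threshold = 3.0 ** 2          # alarm when x^2 exceeds (3 sigma)^2
slope = 0.2                   # drift rate, in sigma per sample

delays = []
for _ in range(500):
    k = 0
    while True:
        x = slope * k + rng.normal()   # drifting mean plus unit noise
        if x ** 2 > threshold:
            break
        k += 1
    delays.append(k)
edd = float(np.mean(delays))
```

Averaging the first-alarm times over many realizations is exactly the empirical counterpart of the EDD; repeating this for multiplicative faults only changes how the fault enters the simulated statistic.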
Cluster Computing | 2018
Mingzhu Tang; Steven X. Ding; Chunhua Yang; Fanyong Cheng; Yuri A. W. Shardt; Wen Long; Daifei Liu
Given the importance of class-imbalanced data and unequal misclassification costs in large wind turbine datasets, this paper proposes a cost-sensitive large margin distribution machine (CLDM) for fault detection of wind turbines. The margin mean and margin variance are used to characterize the margin distribution. The objective function and constraints of the large margin distribution machine (LDM) are modified to be cost-sensitive. Class imbalance and unequal misclassification costs are handled by selecting appropriate cost-sensitive parameters. The CLDM is then trained and tested on data from wind turbines in a wind farm. In order to verify the effectiveness of the CLDM, it is compared with the support vector machine (SVM), cost-sensitive SVM, and LDM. Comprehensive experiments on 7 datasets from a benchmark model of wind turbines and 5 datasets from a real wind farm show that the CLDM achieves better sensitivity, gMean, and average misclassification cost than the other methods.
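The evaluation criteria named above (sensitivity, gMean, and average misclassification cost) can be computed from a confusion matrix as follows; the labels and costs are made up for illustration:

```python
import numpy as np

# Illustrative predictions on an imbalanced fault-detection task.
y_true = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0])   # 1 = faulty (minority)
y_pred = np.array([1, 1, 0, 0, 0, 0, 0, 1, 0, 0])
cost_fn, cost_fp = 10.0, 1.0    # assume missing a fault costs 10x a false alarm

# Confusion-matrix counts.
tp = np.sum((y_true == 1) & (y_pred == 1))
fn = np.sum((y_true == 1) & (y_pred == 0))
tn = np.sum((y_true == 0) & (y_pred == 0))
fp = np.sum((y_true == 0) & (y_pred == 1))

sensitivity = tp / (tp + fn)                 # true positive rate
specificity = tn / (tn + fp)                 # true negative rate
g_mean = float(np.sqrt(sensitivity * specificity))
avg_cost = float((cost_fn * fn + cost_fp * fp) / len(y_true))
```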
Sensors | 2018
Yue Zhang; Xu Yang; Yuri A. W. Shardt; Jiarui Cui; Chaonan Tong
Advanced technology for process monitoring and fault diagnosis is widely used in complex industrial processes. An important issue that needs to be considered is the ability to monitor key performance indicators (KPIs), which often cannot be measured sufficiently quickly or accurately. This paper proposes a data-driven approach based on maximizing the coefficient of determination for probabilistic soft sensor development when data are missing. Firstly, the problem of missing data in the training sample set is solved using the expectation maximization (EM) algorithm. Then, by maximizing the coefficient of determination, a probability model between secondary variables and the KPIs is developed. Finally, a Gaussian mixture model (GMM) is used to estimate the joint probability distribution in the probabilistic soft sensor model, whose parameters are estimated using the EM algorithm. An experimental case study on the alumina concentration in the aluminum electrolysis industry is investigated to demonstrate the advantages and the performance of the proposed approach.
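The conditional-expectation step of such a probabilistic soft sensor can be sketched for the one-component special case, a single joint Gaussian between a secondary variable and the KPI; the linear data-generating model is assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

# Assumed toy relationship between a secondary variable u and the KPI y.
n = 1000
u = rng.normal(size=n)
y = 1.8 * u + 0.3 * rng.normal(size=n)

# Fit the joint Gaussian (one-component special case of the GMM model).
mu = np.array([u.mean(), y.mean()])
S = np.cov(np.vstack([u, y]))          # rows are variables

def soft_sensor(u_new):
    """Conditional mean E[y | u] of the fitted joint Gaussian."""
    return float(mu[1] + S[1, 0] / S[0, 0] * (u_new - mu[0]))
```

A full GMM soft sensor mixes several such conditional means with posterior component weights, with the EM algorithm supplying both the missing-data imputation and the mixture parameters.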
Journal of The Franklin Institute-engineering and Applied Mathematics | 2017
Yuri A. W. Shardt; Biao Huang
Traditionally, closed-loop system identification in the absence of external excitation has focused on determining the identifiability of the plant model based on the interplay between the orders of the different polynomials present. However, due to the presence of the controller, it is possible that the system may not be globally identifiable at a given complexity, but may be locally identifiable given certain restrictions or relationships between the individual parameters present in the system. In order to obtain parameter-specific solutions to the problem, many different approaches can be taken. In this paper, the focus is primarily on an expectation-based analysis of the Fisher information matrix to determine parameter-based constraints on closed-loop identification. Additionally, a method for determining an analytical expression for the expectation operation is presented. The proposed approach is illustrated using a first-order autoregressive model with exogenous input controlled by a lead-lag controller. Monte Carlo simulations are used to validate the resulting constraints.
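The expectation-based view of the Fisher information can be checked numerically, as in this sketch for a simpler open-loop AR(1) model (not the closed-loop ARX case of the paper), where the analytical information N/(1 - a^2) is compared against a Monte Carlo average:

```python
import numpy as np

rng = np.random.default_rng(6)

# For y_t = a*y_{t-1} + e_t with var(e) = 1, the Fisher information of a is
# I(a) = E[sum y_{t-1}^2] = N / (1 - a^2) in the stationary approximation.
a, N, trials = 0.5, 2000, 200
info = []
for _ in range(trials):
    y = np.zeros(N + 1)
    for t in range(1, N + 1):
        y[t] = a * y[t - 1] + rng.normal()
    info.append(np.sum(y[:-1] ** 2))   # observed information for a
mc_info = float(np.mean(info))
analytic_info = N / (1 - a ** 2)
```

The closed-loop analysis in the paper follows the same logic: evaluating such expectations analytically reveals when the information matrix becomes singular and hence which parameter combinations are not identifiable.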
Journal of Control Science and Engineering | 2017
Xu Yang; Jingjing Gao; Yuri A. W. Shardt; Linlin Li; Chaonan Tong
The thickness of the steel strip is an important indicator of the overall strip quality. Deviations in thickness are primarily controlled using the automatic gauge control (AGC) system of each rolling stand. At the last stand, a monitoring AGC system is usually used, where the deviations in thickness can be directly measured by the X-ray thickness gauge and used as the input to the AGC system. However, due to the physical distance between the thickness detection device and the rolling stand, time delay is unavoidably present in the thickness control loop, which can affect control performance and lead to system oscillations. Furthermore, the parameters of the system can change due to perturbations from external disturbances. Therefore, this paper proposes an identification and control scheme for the monitoring AGC system that can handle time delay and parameter uncertainty. The cross-correlation function is used to estimate the time delay of the system, while the system parameters are identified using a recursive least-squares method. The time delay and parameter estimates are then further refined using the Levenberg-Marquardt algorithm, so as to provide the most accurate parameter estimates for the complete system. Simulation results show that, compared with a standard proportional-integral-derivative (PID) controller, the proposed approach is not affected by changes in the time delay or by parameter uncertainties.
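The cross-correlation step can be sketched as follows; the delay, gain, and noise level are illustrative, and the recursive least-squares and Levenberg-Marquardt refinements are reduced to a single batch gain estimate:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy delayed-gain system standing in for the stand-to-gauge transport delay.
n, true_delay, gain = 500, 7, 0.8
u = rng.normal(size=n)
y = np.zeros(n)
y[true_delay:] = gain * u[:-true_delay]
y += 0.05 * rng.normal(size=n)

# The delay is the lag that maximizes the input/output cross-correlation.
max_lag = 20
xcorr = [np.sum(y[lag:] * u[:n - lag]) for lag in range(max_lag + 1)]
delay_hat = int(np.argmax(xcorr))

# With the delay fixed, the gain follows from least squares.
gain_hat = float(np.sum(y[delay_hat:] * u[:n - delay_hat])
                 / np.sum(u[:n - delay_hat] ** 2))
```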