Tiago J. Rato
University of Coimbra
Publications
Featured research published by Tiago J. Rato.
Computer-aided chemical engineering | 2011
Tiago J. Rato; Marco S. Reis
Current industrial processes are characterized by encompassing a large number of interdependent variables, which very often exhibit autocorrelated behavior due to the dynamic nature of the phenomena involved, combined with the high sampling rates of modern data acquisition systems. Multivariate statistical process control charts, such as Hotelling's T², MEWMA, and MCUSUM, have been developed to handle the cross-correlation issue, but they are not able to properly handle the presence of autocorrelation in the data. In order to address both problems simultaneously, alternative procedures were developed, namely by adapting the control limits, using residuals from time series modeling, and applying data transformation techniques, some of which are addressed in this paper, along with others we now propose. The proposed monitoring methods use a combination of Dynamic PCA (DPCA), ARMA models, and missing data estimation methods, allowing for the simultaneous reduction of data dimensionality while capturing its dynamic behavior, thereby also handling the autocorrelation effects. The results obtained show that the proposed methodologies based upon missing data estimation tend to present better performance, constituting good alternatives to the methodologies currently in use.
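A minimal sketch of the dynamic-augmentation idea behind DPCA-based monitoring may help fix the terms: each observation is stacked with lagged copies of the variables, PCA is fitted on the augmented normal-operation data, and Hotelling's T² is tracked on the scores. The lag count, number of components, and simulated data are assumptions for illustration only, not the paper's exact configuration (which also involves ARMA models and missing data estimation).

```python
# Illustrative DPCA-style monitoring sketch (lag count, component count and data
# are assumptions, not the paper's implementation).
import numpy as np
from sklearn.decomposition import PCA

def lagged_matrix(X, lags):
    """Stack each row with its `lags` previous rows: [x_t, x_{t-1}, ..., x_{t-lags}]."""
    n, m = X.shape
    return np.hstack([X[lags - k: n - k] for k in range(lags + 1)])

rng = np.random.default_rng(0)
X_noc = rng.standard_normal((500, 5))          # simulated normal-operation training data
Z = lagged_matrix(X_noc, lags=2)               # dynamic augmentation
Z = (Z - Z.mean(axis=0)) / Z.std(axis=0)       # autoscaling

pca = PCA(n_components=4).fit(Z)               # retained components: an assumption
scores = pca.transform(Z)
t2 = np.sum(scores**2 / pca.explained_variance_, axis=1)   # Hotelling's T² on the scores
t2_limit = np.quantile(t2, 0.99)               # empirical 99% control limit
print(f"T2 limit: {t2_limit:.2f}")
```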
IEEE Transactions on Automation Science and Engineering | 2017
Tiago J. Rato; Jakey Blue; Jacques Pinaton; Marco S. Reis
The overwhelming majority of processes taking place in semiconductor manufacturing operate in a batch mode by imposing time-varying conditions on the products in a cyclic and repetitive fashion. These conditions make process monitoring a very challenging task, especially in massive production plants. Among the state-of-the-art approaches proposed to deal with this problem, the so-called multiway methods incorporate the batch dynamic features in a normal operation model at the expense of estimating a large number of parameters. This makes these approaches prone to overfitting and instability. Moreover, batch trajectories are required to be well aligned in order to provide the expected performance. To overcome these issues and other limitations of the conventional methodologies for process monitoring in semiconductor manufacturing, we propose an approach, translation-invariant multiscale energy-based principal component analysis, that requires a much lower number of estimated parameters. It is free of process trajectory alignment requirements and thus easier to implement and maintain, while still rendering useful information for fault detection and root cause analysis. The proposed approach is based on implementing a translation-invariant wavelet decomposition along the time series profile of each variable in one batch. The normal operational signatures in the time-frequency domain are extracted, modeled, and then used for process monitoring, allowing prompt detection of process abnormalities. The proposed procedure was tested with real industrial data and it proved to effectively detect the existing faults as well as to provide reliable indications of their underlying root causes.
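As a rough illustration of the energy-feature construction, the sketch below applies a translation-invariant (stationary) wavelet transform to one variable's batch profile and computes the energy at each scale; such per-scale energies would then feed a PCA-based monitoring model. The wavelet, decomposition level, and data are illustrative assumptions rather than the authors' settings.

```python
# Per-scale energies from a stationary wavelet transform (illustrative assumptions:
# db4 wavelet, 3 levels, simulated profile of length 256).
import numpy as np
import pywt

def scale_energies(profile, wavelet="db4", level=3):
    """Energy of the detail coefficients at each scale, plus the coarsest approximation."""
    coeffs = pywt.swt(profile, wavelet, level=level)      # [(cA_n, cD_n), ..., (cA_1, cD_1)]
    energies = [np.sum(cD**2) for _, cD in coeffs]
    energies.append(np.sum(coeffs[0][0]**2))              # coarsest approximation energy
    return np.array(energies)

rng = np.random.default_rng(1)
batch_profile = rng.standard_normal(256)                  # one variable's trajectory in one batch
print(scale_energies(batch_profile))
```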
Journal of Chemometrics | 2015
Tiago J. Rato; Marco S. Reis
We present a process monitoring scheme aimed at detecting changes in the networked structure of process data that is able to handle, simultaneously, three pervasive aspects of industrial systems: (i) their multivariate nature, with strong cross-correlations linking the variables; (ii) the dynamic behavior of processes, as a consequence of the presence of inertial elements coupled with the high sampling rates of industrial acquisition systems; and (iii) the multiscale nature of systems, resulting from the superposition of multiple phenomena spanning different regions of the time-frequency domain. Contrary to current approaches, the multivariate structure is described through a local measure of association, the partial correlation, in order to improve the diagnosis features without compromising detection speed. It is also used to infer the relevant causal structure active at each scale, providing a fine map of the complex behavior of the system. The scale-dependent causal networks are incorporated in multiscale monitoring through data-driven sensitivity enhancing transformations (SETs). The results obtained demonstrate that the use of the SET is a major factor in detecting process upsets. In fact, it was observed that even single-scale monitoring methodologies can achieve detection capabilities comparable to those of their multiscale counterparts, as long as a proper SET is employed. However, the multiscale approach still proved to be useful because it led to good results using a much simpler SET model of the system. Therefore, the application of wavelet transforms is advantageous for systems that are difficult to model, providing a good compromise between modeling complexity and monitoring performance.
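For readers unfamiliar with the local association measure used here, the sketch below computes partial correlations from the precision (inverse covariance) matrix; the simulated data and the induced link are illustrative assumptions.

```python
# Partial correlations from the precision matrix (illustrative data).
import numpy as np

def partial_correlations(X):
    """Partial correlation between each pair of variables, given all the others."""
    prec = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

rng = np.random.default_rng(2)
X = rng.standard_normal((1000, 4))
X[:, 1] += 0.8 * X[:, 0]            # induce a direct link between variables 0 and 1
print(np.round(partial_correlations(X), 2))
```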
Quality Engineering | 2016
Bart De Ketelaere; Tiago J. Rato; Eric Schmitt; Mia Hubert
During the last decades, we have evolved from measuring a few process variables at sparse intervals to a situation in which a multitude of variables are measured at high speed. This evidently provides opportunities for extracting more information from processes and for pinpointing out-of-control situations, but transforming the large data streams into valuable information is still a challenging task. In this contribution we focus on the analysis of time-dependent processes, since this is the scenario most often encountered in practice, due to high-rate sampling systems and the natural behavior of many real-life applications. The modeling and monitoring challenges that statistical process monitoring (SPM) techniques face in this situation are described and possible routes are provided. Simulation results as well as a real-life data set are used throughout the article.
Journal of Chemometrics | 2016
Eric Schmitt; Tiago J. Rato; Bart De Ketelaere; Marco S. Reis; Mia Hubert
Methods based on principal component analysis (PCA) are widely used for statistical process monitoring of high-dimensional processes. Allowing the monitoring model to update as new observations are acquired extends this class of approaches to non-stationary processes. The updating procedure is governed by a weighting parameter that defines the rate at which older observations are discarded, and therefore it greatly affects model quality and monitoring performance. Additionally, monitoring non-stationary processes can require adjustments to the parameters defining the control limits of adaptive PCA in order to achieve the intended false detection rate. These two aspects require careful consideration prior to the implementation of adaptive PCA. Towards this end, approaches are given in this paper for both parameter selection challenges. Results are presented for a simulation and two real-life industrial process examples.
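A minimal sketch of the adaptive ingredient may clarify the role of the weighting parameter: the mean and covariance are updated with an exponential forgetting factor, so older observations are progressively discarded before the principal components are recomputed. The forgetting factor value and the simulated drift are assumptions for illustration only.

```python
# Exponentially weighted updating of mean and covariance (illustrative forgetting
# factor and simulated drift; not the paper's selection procedure).
import numpy as np

lam = 0.99                                   # weighting (forgetting) parameter
rng = np.random.default_rng(3)
X = rng.standard_normal((2000, 5))
X[1000:, 0] += np.linspace(0, 3, 1000)       # slow drift (non-stationary behavior)

mean = X[0].copy()
cov = np.eye(5)
for x in X[1:]:
    mean = lam * mean + (1 - lam) * x
    dev = (x - mean)[:, None]
    cov = lam * cov + (1 - lam) * (dev @ dev.T)

eigvals, eigvecs = np.linalg.eigh(cov)       # current principal directions
print(np.round(eigvals[::-1], 2))            # variances of the adapted components
```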
Journal of Chemometrics | 2018
Catarina P. Santos; Tiago J. Rato; Marco S. Reis
In a digital era where terabytes of structured and unstructured records are created and stored every minute, the importance of collecting small amounts of high-quality data is often undervalued. However, this activity plays a critical role in industrial and laboratory settings when addressing problems from process modeling and analysis to optimization and robust design. Implementing a screening design is usually the way to begin a systematic statistical Design of Experiments program. Its aim is to find the influential factors and establish a first description of the process under analysis. Recently, new developments have emerged in this field, especially with the appearance of a new class of designs that combine screening efficiency with the possibility of estimating quadratic effects (definitive screening designs). Therefore, several alternatives are currently available for conducting screening studies, but information is still scarce on the pros and cons of each methodology. This study was thus designed to gather useful information from the user's perspective by independently considering each design with rather standard settings. For each design, its ability to recover the correct model structure was evaluated through Monte Carlo simulations on several simulated process structures containing elements likely to be found in practice (they obey the general principles of effect sparsity, hierarchy, and heredity). As some screening designs can estimate quadratic terms, these effects were also considered in the simulated models, and three-level designs were brought into the analysis for comparison purposes, even though they are not screening designs.
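The Monte Carlo evaluation idea can be sketched as follows: simulate responses from a sparse true model over a design, refit the effects, and count how often the truly active factors are recovered. The design used below (a full 2^4 factorial), the effect sizes, and the decision rule are illustrative assumptions, not the screening designs or criteria compared in the paper.

```python
# Monte Carlo check of model-structure recovery over a two-level design (illustrative
# design, effect sizes and threshold; not the paper's study settings).
import itertools
import numpy as np

rng = np.random.default_rng(4)
design = np.array(list(itertools.product([-1, 1], repeat=4)), dtype=float)   # 16 runs, 4 factors
true_beta = np.array([3.0, 2.0, 0.0, 0.0])                                   # effect sparsity

hits = 0
n_sim = 500
for _ in range(n_sim):
    y = design @ true_beta + rng.normal(scale=1.0, size=len(design))
    beta_hat, *_ = np.linalg.lstsq(design, y - y.mean(), rcond=None)
    active = np.abs(beta_hat) > 1.0                                           # simple decision rule
    hits += np.array_equal(active, true_beta != 0)
print(f"correct model structure recovered in {hits / n_sim:.1%} of simulations")
```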
Archive | 2017
Tiago J. Rato; Marco S. Reis
Many of the fault detection and diagnosis frameworks currently used in complex industrial processes rely on the application of data-driven models. Among these methodologies, those based on principal component analysis (PCA) are particularly relevant due to their effectiveness in describing the normal operation conditions (NOC) in a parsimonious way, resorting to a reduced set of latent variables. However, PCA models are non-causal by nature and therefore fail to extract the intrinsic structure of the relationships between the variables, leading to limited fault diagnosis capabilities. To circumvent this limitation, we propose to implement a data-driven pre-processing module that codifies the causal structure of the data and that can be easily plugged into current monitoring schemes. This pre-processing module makes use of a Sensitivity Enhancing Transformation (SET) that decorrelates the variables based on their causal structure, inferred through partial correlations. Therefore, deviations in the new decorrelated variables represent specific changes in the process structure, making fault diagnosis more transparent. To demonstrate the applicability of the proposed approach, two case studies are considered (a CSTR and the Tennessee Eastman process). The results show that mapping the causal structure by means of the SET leads to a set of variables directly linked with the true source of the fault, providing a simple and effective way to improve fault detection and diagnosis.
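A minimal sketch of the SET idea, under an assumed (known) causal ordering: each variable is regressed on its causal parents and replaced by the regression residual, so the transformed variables are decorrelated and deviations in them point to specific structural changes. The ordering, coefficients, and data below are illustrative assumptions, not the causal structures of the case studies.

```python
# Regression-residual decorrelation following an assumed causal ordering (illustrative).
import numpy as np

def sensitivity_enhancing_transform(X, parents):
    """Replace each variable by its residual after regressing on its parent variables."""
    T = np.empty_like(X)
    for j in range(X.shape[1]):
        if parents[j]:
            A = X[:, parents[j]]
            coef, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
            T[:, j] = X[:, j] - A @ coef
        else:
            T[:, j] = X[:, j]
    return T

rng = np.random.default_rng(5)
x0 = rng.standard_normal(1000)
x1 = 0.9 * x0 + 0.1 * rng.standard_normal(1000)          # x0 -> x1
x2 = 0.7 * x1 + 0.1 * rng.standard_normal(1000)          # x1 -> x2
X = np.column_stack([x0, x1, x2])
T = sensitivity_enhancing_transform(X, parents={0: [], 1: [0], 2: [1]})
print(np.round(np.corrcoef(T, rowvar=False), 2))          # approximately the identity matrix
```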
Computer-aided chemical engineering | 2014
Tiago J. Rato; Marco S. Reis
A new pre-processing methodology is proposed for improving the capability to detect changes in the process structure. It is named the sensitivity enhancing transformation (SET), and it uses information on the topology of the causal network underlying the measured process variables in order to construct a set of uncorrelated transformed variables around which the detection of changes in the variables' correlation structure is maximized. A new group of monitoring statistics, based on partial correlations, is also presented that takes full advantage of the SET features. The use of partial correlations as an association measure provides a finer map of the connectivity between process variables, even without attributing any causal directionality. The availability of such a finer association map enables the development of more sensitive schemes for detecting structural changes, such as the ones proposed in this work. The results obtained in the comparison study, involving other current methodologies for monitoring the correlation structure, show that the proposed methods are able to effectively detect changes in the system's structure and present higher sensitivity than the current monitoring statistics tested.
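One way to picture a partial-correlation-based monitoring statistic is sketched below: partial correlations are recomputed in a moving window and their squared deviation from the reference (normal operation) structure is tracked. The window length, the specific deviation statistic, and the data are illustrative assumptions, not the statistics proposed in the paper.

```python
# Moving-window deviation of partial correlations from a reference structure (illustrative).
import numpy as np

def partial_corr(X):
    prec = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(prec))
    p = -prec / np.outer(d, d)
    np.fill_diagonal(p, 1.0)
    return p

rng = np.random.default_rng(6)
noc = rng.standard_normal((2000, 3))
noc[:, 1] += 0.8 * noc[:, 0]                          # reference structure: a link between 0 and 1
ref = partial_corr(noc)

fault = rng.standard_normal((500, 3))                 # structural fault: the 0-1 link disappears
stats = [np.sum((partial_corr(fault[i:i + 100]) - ref) ** 2) for i in range(0, 400, 100)]
print(np.round(stats, 2))                             # large values flag the structural change
```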
Computer-aided chemical engineering | 2012
Tiago J. Rato; Marco S. Reis
In this paper we address the problem of monitoring the performance of automatic control loops using data collected from the process. This topic has been gaining importance in recent years, not only because the performance of controllers plays a central role in the quality and efficiency of processes, their safe operation, and their environmental impact, but also because it can be used in improvement activities, as it enables detected problems to be traced back to malfunctions at the level of the controllers' actions. In this paper, a modified controller performance index, I_M, is proposed, which is able to distinguish perturbations originating in the system's core modules from those originating at the level of the disturbances. It is a generalization of the current historical index, I_v, as it can be shown that it reduces to that index in the particular case where the variability of the disturbances is the same in the reference (or benchmark) period and in the monitoring period. The results obtained demonstrate that the proposed historical data benchmark index is able to maintain the target false alarm rate under situations where the variability of the disturbances increases, a situation in which the current index, I_v, fails. When the disturbances' variability is maintained, both indices present similar detection capability, as expected.
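To make the benchmark idea concrete, the sketch below compares the control error variability in a monitoring window against a reference ("benchmark") period. The simple variance ratio used here is only an illustrative stand-in for that comparison; the paper's I_v and I_M definitions are not reproduced.

```python
# Illustrative historical-benchmark comparison of control error variability
# (the variance ratio is an assumption, not the paper's Iv or IM index).
import numpy as np

def historical_index(error_benchmark, error_monitoring):
    """Ratio of benchmark-period to monitoring-period variance of the control error."""
    return np.var(error_benchmark) / np.var(error_monitoring)

rng = np.random.default_rng(7)
e_ref = rng.normal(scale=1.0, size=1000)          # control error under good performance
e_bad = rng.normal(scale=2.0, size=1000)          # degraded control: larger error variance
print(f"index vs. reference period: {historical_index(e_ref, e_ref[:500]):.2f}")
print(f"index vs. degraded period:  {historical_index(e_ref, e_bad):.2f}")    # drops below 1
```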
Chemometrics and Intelligent Laboratory Systems | 2013
Tiago J. Rato; Marco S. Reis