Shankar Narasimhan
Indian Institute of Technology Madras
Publications
Featured research published by Shankar Narasimhan.
Computers & Chemical Engineering | 1998
Sujoy Sen; Shankar Narasimhan; Kalyanmoy Deb
A generalized sensor network design algorithm for finding the optimal placement of sensors in a linear mass flow process has been developed and implemented. The algorithm developed in this work is based on a combination of concepts drawn from graph theory and genetic algorithms. The sensor network is designed to optimize a single criterion of cost, reliability or estimation accuracy, using a minimum number of sensors. Application to a steam-metering network of a methanol plant demonstrates the versatility of this method.
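A minimal sketch of the genetic-algorithm side of such a design procedure is given below. The binary chromosome (one bit per measurable stream), the sensor costs, and the placeholder observability check are illustrative assumptions only, not the encoding or graph-theoretic tests used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-stream sensor costs for a small flow network (illustrative).
costs = np.array([3.0, 2.0, 4.0, 1.5, 2.5, 3.5])
n_streams = len(costs)

def observable(selection):
    # Placeholder feasibility check; a real implementation would test whether
    # unmeasured flows remain estimable from the mass balances, e.g. via
    # graph-theoretic (spanning tree) arguments.
    return selection.sum() >= 3

def fitness(selection):
    # Minimise total sensor cost; unobservable networks are heavily penalised.
    cost = costs @ selection
    return -cost if observable(selection) else -cost - 100.0

def evolve(pop_size=20, generations=50, p_mut=0.1):
    pop = rng.integers(0, 2, size=(pop_size, n_streams))
    for _ in range(generations):
        fit = np.array([fitness(ind) for ind in pop])
        # Binary tournament selection
        idx = rng.integers(0, pop_size, size=(pop_size, 2))
        winners = np.where(fit[idx[:, 0]] >= fit[idx[:, 1]], idx[:, 0], idx[:, 1])
        parents = pop[winners]
        # One-point crossover between consecutive parent pairs
        children = parents.copy()
        cuts = rng.integers(1, n_streams, size=pop_size // 2)
        for k, c in enumerate(cuts):
            children[2 * k, c:] = parents[2 * k + 1, c:]
            children[2 * k + 1, c:] = parents[2 * k, c:]
        # Bit-flip mutation
        flips = rng.random(children.shape) < p_mut
        pop = np.where(flips, 1 - children, children)
    fit = np.array([fitness(ind) for ind in pop])
    return pop[fit.argmax()]

print(evolve())   # best sensor placement found (1 = stream is measured)
```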
Control Engineering Practice | 2008
Shankar Narasimhan; Sirish L. Shah
Principal Components Analysis (PCA) is increasingly being used for reducing the dimensionality of multivariate data, process monitoring, model identification, and fault diagnosis. However, as PCA is commonly applied, it can be statistically justified only if the measurement errors in different variables are assumed to be independent and identically distributed. In this paper, we develop the theoretical basis and an iterative algorithm for model identification using PCA when measurement errors in different variables are unequal and correlated. The proposed approach not only gives accurate estimates of both the model and the error covariance matrix, but also provides answers to the two important issues of data scaling and model order determination.
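The sketch below only illustrates the general idea of iterating between error-based scaling and PCA when error variances are unequal; the function name and the residual-based variance update are illustrative assumptions and do not reproduce the maximum-likelihood algorithm derived in the paper.

```python
import numpy as np

def iterative_pca(X, n_components, n_iter=25):
    """Illustrative heuristic: alternate between (i) PCA on data scaled by the
    current error standard deviations and (ii) re-estimating those standard
    deviations from the reconstruction residuals. Not the paper's algorithm,
    which also handles correlated errors via maximum likelihood."""
    sigma = np.ones(X.shape[1])                   # initial error std-dev guess
    for _ in range(n_iter):
        Xs = X / sigma                            # scale so errors look ~i.i.d.
        mu = Xs.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xs - mu, full_matrices=False)
        Vk = Vt[:n_components]                    # retained model directions
        Xs_hat = (Xs - mu) @ Vk.T @ Vk + mu       # denoised data, scaled units
        resid = (Xs - Xs_hat) * sigma             # residuals in original units
        sigma = resid.std(axis=0) + 1e-12         # updated error std devs
    A = Vt[n_components:] / sigma                 # constraint matrix, original units
    return A, sigma
```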
Computers & Chemical Engineering | 2011
Kris Villez; Babji Srinivasan; Raghunathan Rengaswamy; Shankar Narasimhan; Venkat Venkatasubramanian
This paper is concerned with the application of Kalman-filter-based methods for Fault Detection and Identification (FDI). The original Kalman-filter-based method, formulated for bias faults only, is extended to three more fault types, namely an actuator or sensor that is stuck, sticky, or drifting. To benchmark the proposed method, a nonlinear buffer tank system is simulated along with its linearized version. As expected, the Kalman-filter-based method delivers good results for the linear version of the system but considerably poorer results for the nonlinear version. To alleviate this problem, the Extended Kalman Filter (EKF) is investigated as an alternative to the Kalman filter. In addition to evaluating detection and diagnosis performance for several faults, the effect of process dynamics on fault identification and diagnosis, as well as the effect of including the time of fault occurrence as a parameter in the diagnosis task, is investigated.
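A bare-bones sketch of the underlying predictor-corrector recursion with an innovation-based fault flag is shown below. The fault-identification step (discriminating bias, stuck, sticky, and drift faults) is omitted, and all system matrices and thresholds are assumed given.

```python
import numpy as np

def kalman_fdi(y, A, C, Q, R, threshold=3.0):
    """Sketch of Kalman-filter-based fault detection: run the standard
    predictor-corrector recursion and flag samples whose normalised innovation
    exceeds a threshold. Identifying the fault type would additionally fit each
    candidate fault model to the innovation sequence (not shown)."""
    n = A.shape[0]
    x = np.zeros(n)
    P = np.eye(n)
    flags = []
    for yk in y:
        # Predictor step
        x = A @ x
        P = A @ P @ A.T + Q
        # Innovation and its covariance
        e = yk - C @ x
        S = C @ P @ C.T + R
        # Normalised innovation test statistic
        d = float(e @ np.linalg.solve(S, e))
        flags.append(d > threshold**2)
        # Corrector step
        K = P @ C.T @ np.linalg.inv(S)
        x = x + K @ e
        P = (np.eye(n) - K @ C) @ P
    return flags
```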
Computers & Chemical Engineering | 2002
K. Shivakumar; Shankar Narasimhan
The simultaneous heat exchanger network (HEN) synthesis optimization problem is generally formulated as a mixed integer non-linear program (MINLP) by using the concept of HEN superstructures. Although non-linear program (NLP) formulations have also been proposed, they suffer from some limitations. In this work, a new formulation of the simultaneous approach for HEN synthesis is proposed by representing the HEN superstructure as a process graph. This representation allows any HEN to be evolved by circulating appropriate enthalpy flows around a set of independent loops of the process graph. By exploiting this feature, a robust and efficient NLP problem is formulated. Compared to MINLP formulations, a significant reduction in problem size is achieved. The proposed formulation can handle fixed charges of exchangers as well as design constraints such as restricted, required, and forbidden matches, and variable target temperatures. The robustness and efficiency of the formulation are demonstrated through several examples, using different initial solutions (including those obtained from Pinch Technology) and both superstructures proposed in the literature.
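As a rough illustration of the loop idea, the toy problem below has a single independent loop (hot stream, cold stream, hot utility), so one enthalpy "loop flow" fixes the whole network. All stream data, cost coefficients, and the minimum approach temperature are invented for illustration and are unrelated to the examples in the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical single-loop example: cold stream C1 needs Q_total kW, supplied
# partly by hot stream H1 (exchanger E1) and partly by a hot utility (heater).
Q_total = 800.0                 # kW, total duty required by C1
FCp_H1, FCp_C1 = 20.0, 25.0     # kW/K heat-capacity flow rates
Th_in, Tc_in = 180.0, 40.0      # degC inlet temperatures
U = 0.8                         # kW/m^2K overall heat-transfer coefficient
area_cost = 1200.0              # $ per m^2 (annualised)
utility_cost = 110.0            # $ per kW (annualised)

def lmtd(dt1, dt2):
    return (dt1 - dt2) / np.log(dt1 / dt2) if abs(dt1 - dt2) > 1e-6 else dt1

def total_cost(x):
    """x = duty routed through exchanger E1; the remainder comes from the utility."""
    Th_out = Th_in - x / FCp_H1
    Tc_out = Tc_in + x / FCp_C1
    dt1, dt2 = Th_in - Tc_out, Th_out - Tc_in
    if dt1 <= 1.0 or dt2 <= 1.0:          # enforce a minimum approach temperature
        return 1e9
    area = x / (U * lmtd(dt1, dt2)) if x > 0 else 0.0
    return area_cost * area + utility_cost * (Q_total - x)

res = minimize_scalar(total_cost, bounds=(0.0, Q_total), method="bounded")
print(f"optimal exchanger duty = {res.x:.1f} kW, annual cost = {res.fun:.0f} $")
```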
Computers & Chemical Engineering | 2011
H. Prashanth Reddy; Shankar Narasimhan; S. Murty Bhallamudi; S. Bairagi
Dynamic simulation models can be used along with flow and pressure measurements for on-line leak detection and identification in gas pipeline networks. In this two-part paper, a methodology is proposed for detecting and localizing leaks occurring in gas pipelines. The main features of the proposed methodology are: (i) it is applicable to both single pipelines and pipeline networks, and (ii) it considers non-ideal gas mixtures. In order to achieve the desired computational efficiency for on-line deployment, an efficient state estimation technique based on a transfer function model, previously developed by the authors, is embedded in a hypothesis testing framework. In Part I of this paper, a detailed description of the methodology is presented, and its performance is evaluated using simulations on two illustrative pipeline systems. The proposed method is shown to perform satisfactorily even with noisy measurements and during transient conditions, provided there is sufficient redundancy in the measurements.
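The hypothesis-testing step can be sketched roughly as follows: a global chi-square test on the standardized residuals declares a leak, and likelihood-ratio-style scores against candidate leak signatures localize it. The signature construction and thresholds here are illustrative assumptions, not the transfer-function-based procedure of the paper.

```python
import numpy as np

def localize_leak(residuals, signatures, sigma, chi2_crit):
    """Sketch of residual-based leak detection and localization: given residuals
    between measured and simulated (no-leak) pressures/flows, test the no-leak
    hypothesis, then score each candidate leak location by how well its residual
    'signature' direction (from the simulation model) explains the residuals.
    Returns the index of the best candidate, or None if no leak is declared."""
    r = residuals / sigma                       # standardised residuals
    t0 = float(r @ r)                           # global (no-leak) test statistic
    if t0 < chi2_crit:                          # chi-square threshold (assumed given)
        return None
    scores = []
    for s in signatures:
        s_std = s / sigma
        scores.append((r @ s_std) ** 2 / (s_std @ s_std))
    return int(np.argmax(scores))
```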
Computers & Chemical Engineering | 1993
P. Harikumar; Shankar Narasimhan
In Part I (Computers chem. Engng 17, 1115–1120), a procedure was described for solving the data reconciliation problem which includes bounds on process variables. The statistical distributions of the constraint and measurement residuals were also derived. In this paper, two methods are proposed for gross error detection that make use of the results of Part I. One of the methods makes use of bound information in gross error detection, while the other does not. The sequence in which gross error detection and data reconciliation are performed also differs between the two methods. Simulation results show that, compared to currently available methods, the proposed methods give better gross error detection performance and more accurate estimates that always satisfy the bounds, especially when tight bounds are specified.
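For context, a bare-bones version of the classical (unconstrained) measurement test for gross errors is sketched below; the bound-aware variants proposed in the paper modify the residual distributions used in the test.

```python
import numpy as np
from scipy import stats

def measurement_test(y, y_hat, Sy, alpha=0.05):
    """Sketch of the classical measurement test: standardise the measurement
    residuals (measured minus reconciled values) and flag those exceeding a
    normal critical value. For simplicity the measurement variances are used
    for standardisation; the exact residual covariance would be Sy minus the
    covariance of the reconciled estimates."""
    r = y - y_hat                               # measurement residuals
    z = r / np.sqrt(np.diag(Sy))                # approximate standardisation
    crit = stats.norm.ppf(1 - alpha / 2)
    return np.where(np.abs(z) > crit)[0]        # indices of suspected gross errors
```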
Computers & Chemical Engineering | 1993
Shankar Narasimhan; P. Harikumar
Data reconciliation and gross error detection techniques can be improved by exploiting information from bounds on process variables. In this paper, a new approach is proposed for incorporating upper and lower bounds on process variables in data reconciliation and gross error detection. Bounds on process variables are directly incorporated as constraints in data reconciliation, and the resulting problem is solved using an efficient quadratic programming algorithm. More importantly, a method to obtain the statistical distributions of measurement residuals and constraint residuals has been developed, which is useful for gross error detection. Gross error detection methods based on this approach are described in Part II of this series (Comput. chem. Engng 17, 1121–1128).
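A minimal sketch of bounded data reconciliation posed as a quadratic program, using a general-purpose solver rather than the dedicated algorithm developed in the paper, might look as follows (the three-stream splitter data are hypothetical):

```python
import numpy as np
from scipy.optimize import minimize

def reconcile_with_bounds(y, A, Sy, lb, ub):
    """Find estimates x closest to the measurements y (weighted by the inverse
    error covariance) that satisfy the linear balances A x = 0 and the bounds."""
    W = np.linalg.inv(Sy)
    obj = lambda x: (x - y) @ W @ (x - y)
    jac = lambda x: 2.0 * W @ (x - y)
    cons = {"type": "eq", "fun": lambda x: A @ x, "jac": lambda x: A}
    res = minimize(obj, y, jac=jac, bounds=list(zip(lb, ub)),
                   constraints=[cons], method="SLSQP")
    return res.x

# Hypothetical 3-stream splitter: stream 1 splits into streams 2 and 3
A = np.array([[1.0, -1.0, -1.0]])
y = np.array([10.3, 4.1, 6.5])                   # noisy flow measurements
Sy = np.diag([0.04, 0.04, 0.04])                 # measurement error covariance
print(reconcile_with_bounds(y, A, Sy, lb=[0.0] * 3, ub=[20.0] * 3))
```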
IEEE Transactions on Automatic Control | 2013
Raghunathan Rengaswamy; Shankar Narasimhan; Vidyashankar Kuppuraj
This technical note presents a new Receding-horizon Nonlinear Kalman (RNK) filter for state estimation in nonlinear systems with state constraints. Such problems appear in almost all engineering disciplines. Unlike the Moving Horizon Estimation (MHE) approach, the RNK Filter formulation follows the Kalman Filter (KF) predictor-corrector framework. The corrector step is solved as an optimization problem that handles constraints effectively. The performance improvement and robustness of the proposed estimator vis-a-vis the extended Kalman filter (EKF) are demonstrated through nonlinear examples. These examples also demonstrate the computational advantages of the proposed approach over the MHE formulation. The computational gain is due to the fact that the proposed RNK formulation avoids the repeated integration within an optimization loop that is required in an MHE formulation. Further, the proposed formulation results in a quadratic program (QP) problem for the corrector step when the measurement model is linear, irrespective of the state propagation model. In contrast, a nonlinear programming problem (NLP) needs to be solved when an MHE formulation is used for such problems. Also, the proposed filter for unconstrained linear systems results in a KF estimate for the current instant and smoothed estimates for the other instants of the receding horizon.
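A one-step sketch of a constrained corrector in this spirit is shown below; the actual RNK filter optimizes over a receding horizon of corrections, and the bound constraints here merely stand in for general state constraints.

```python
import numpy as np
from scipy.optimize import minimize

def constrained_corrector(x_pred, P_pred, y, C, R, lb, ub):
    """Fuse the predicted state with the current measurement by solving a small
    constrained least-squares problem, in the predictor-corrector spirit of the
    note. One-step illustration only; the RNK filter works over a horizon."""
    Pinv = np.linalg.inv(P_pred)
    Rinv = np.linalg.inv(R)

    def obj(x):
        dx, dy = x - x_pred, y - C @ x
        return dx @ Pinv @ dx + dy @ Rinv @ dy

    res = minimize(obj, x_pred, bounds=list(zip(lb, ub)), method="L-BFGS-B")
    return res.x
```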
Computers & Chemical Engineering | 2015
Shankar Narasimhan; Nirav Bhatt
Data reconciliation (DR) and principal component analysis (PCA) are two popular data analysis techniques in process industries. Data reconciliation is used to obtain accurate and consistent estimates of variables and parameters from erroneous measurements. PCA is primarily used as a method for reducing the dimensionality of high dimensional data and as a preprocessing technique for denoising measurements. These techniques have been developed and deployed independently of each other. The primary purpose of this article is to elucidate the close relationship between these two seemingly disparate techniques. This leads to a unified framework for applying PCA and DR. Further, we show how the two techniques can be deployed together in a collaborative and consistent manner to process data. The framework has been extended to deal with partially measured systems and to incorporate partial knowledge available about the process model.
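The connection can be illustrated numerically: with i.i.d. errors, denoising by PCA reconstruction is the same operation as data reconciliation using the constraint matrix identified from the residual principal directions. The flow-balance example and noise level below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
x2 = rng.uniform(1.0, 5.0, 200)
x3 = rng.uniform(1.0, 5.0, 200)
X_true = np.column_stack([x2 + x3, x2, x3])      # satisfies the balance x1 = x2 + x3
Y = X_true + 0.1 * rng.standard_normal(X_true.shape)

# PCA: retain 2 components; the remaining direction defines a constraint A x ~ 0
_, _, Vt = np.linalg.svd(Y, full_matrices=False)
V2, A = Vt[:2], Vt[2:]                           # model and residual subspaces
X_pca = Y @ V2.T @ V2                            # PCA denoising (reconstruction)

# Data reconciliation with the PCA-identified constraints and unit error covariance
P_dr = np.eye(3) - A.T @ np.linalg.inv(A @ A.T) @ A
X_dr = Y @ P_dr.T

print(np.allclose(X_pca, X_dr))                  # True: the two estimates coincide
print(np.round(A / A[0, 0], 3))                  # identified constraint, roughly [1, -1, -1]
```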
Computer Aided Chemical Engineering | 2015
Sriniketh Srinivasan; Julien Billeter; Shankar Narasimhan; Dominique Bonvin
Abstract of the conference paper

Concentrations measured during the course of a chemical reaction are corrupted with noise, which reduces the quality of the information. Since these measurements are used for identifying kinetic models, the noise impairs the ability to identify accurate models. The noise in concentration measurements can be reduced using data reconciliation, exploiting, for example, the material balances derived from stoichiometry as constraints. However, additional constraints can be obtained by transforming concentrations into extents and invariants, which leads to more efficient identification of kinetic models for multiple reaction systems. This paper uses the transformation to extents and invariants and formulates the data reconciliation problem accordingly. This formulation has the advantage that non-negativity and monotonicity constraints can be imposed on selected extents. A simulated example is used to demonstrate that reconciled measurements lead to the identification of more accurate kinetic models.

Extended abstract

Reliable kinetic models of chemical reaction systems should include information on all rate processes of significance in the system. Apart from chemical reactions, such models should also describe the mass exchanged with the environment via the inlet and outlet streams and the mass transferred between phases. Model identification and the estimation of rate parameters are carried out using measurements obtained during the course of the reaction [1]. Identifying all rate processes simultaneously leads to combinatorial complexity [1]. Alternatively, identification can be carried out incrementally by transforming the concentrations into extents and identifying each extent separately [2]. Since measurements are inevitably corrupted by random errors, the identification of kinetic models and the estimation of rate parameters are affected by error propagation [3]. Data reconciliation is a technique that uses constraints to obtain more accurate estimates of variables by reducing the effect of measurement errors [4]. It can be formulated as an optimization problem constrained by the law of conservation of mass [5, 6] and the positivity of the reconciled concentrations. Consequently, model identification can be performed with reconciled concentrations. This paper presents a reformulation of the original reconciliation problem directly in terms of extents, which allows the use of additional constraints such as the monotonicity of extents. Such a reformulation improves the accuracy of the reconciled extents, and hence of the concentrations, and leads to better model discrimination and parameter estimation. The advantages derived from the use of reconciled extents are illustrated using a simulated example.

References:
[1] Bardow et al., Chem. Eng. Sci., 2004, 59, 2673-2684.
[2] Bhatt et al., AIChE J., 2010, 56, 2873-2886.
[3] Billeter et al., Chemom. Intell. Lab. Syst., 2008, 93, 120-131.
[4] S. Narasimhan and C. Jordache, Data Reconciliation and Gross Error Detection, Elsevier, 1999.
[5] Reklaitis et al., Chem. Eng. Sci., 1975, 30, 243-247.
[6] Srinivasan et al., IFAC Workshop on Thermodynamic Foundations of Mathematical Systems Theory, Lyon, 2013.
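A small sketch of how the extra constraints enter, for a single measured extent profile, is given below. The data, noise level, and solver choice are illustrative assumptions; the paper reconciles all extents jointly via the transformation of the measured concentrations.

```python
import numpy as np
from scipy.optimize import minimize

def reconcile_extent(x_meas, sigma):
    """Reconcile one noisy extent profile subject to non-negativity and
    monotonicity (a batch-reaction extent cannot decrease over time), posed as
    a weighted least-squares problem with inequality constraints."""
    T = len(x_meas)
    w = 1.0 / sigma**2

    def obj(x):
        return np.sum(w * (x - x_meas) ** 2)

    cons = [{"type": "ineq", "fun": lambda x, t=t: x[t + 1] - x[t]}  # monotonicity
            for t in range(T - 1)]
    res = minimize(obj, np.maximum(x_meas, 0.0),
                   bounds=[(0.0, None)] * T, constraints=cons, method="SLSQP")
    return res.x

# Hypothetical noisy extent measurements (mol) from a simulated batch reaction
x_meas = np.array([0.02, 0.10, 0.08, 0.21, 0.35, 0.33, 0.47, 0.52])
print(reconcile_extent(x_meas, sigma=0.05))      # non-decreasing, non-negative profile
```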