Niranjan A. Subrahmanya
ExxonMobil
Publications
Featured research published by Niranjan A. Subrahmanya.
Automatica | 2009
Niranjan A. Subrahmanya; Yung C. Shin
A novel adaptive version of the divided difference filter (DDF) applicable to non-linear systems with a linear output equation is presented in this work. In order to make the filter robust to modeling errors, upper bounds on the state covariance matrix are derived. The parameters of this upper bound are then estimated using a combination of offline tuning and online optimization with a linear matrix inequality (LMI) constraint, which ensures that the predicted output error covariance is larger than the observed output error covariance. The resulting sub-optimal, high-gain filter is applied to the problem of joint state and parameter estimation. Simulation results demonstrate the superior performance of the proposed filter as compared to the standard DDF.
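A minimal illustrative sketch of the covariance-inflation idea behind such an adaptive filter: if the observed output-error covariance exceeds the predicted one, the predicted state covariance is scaled up so the filter remains consistent. The scalar inflation rule and all names below are assumptions for illustration, not the paper's LMI-constrained optimization.

```python
import numpy as np

def inflate_state_covariance(P_pred, H, R, innovations, eps=1e-9):
    """Scale P_pred so that the predicted innovation covariance dominates the
    sample covariance of recent innovations (a crude stand-in for enforcing
    S_predicted >= S_observed)."""
    S_pred = H @ P_pred @ H.T + R                          # predicted output-error covariance
    S_obs = np.atleast_2d(np.cov(np.asarray(innovations).T, bias=True))  # observed output-error covariance
    # Largest generalized eigenvalue of (S_obs, S_pred); a value > 1 signals under-prediction.
    ratio = np.max(np.real(np.linalg.eigvals(
        np.linalg.solve(S_pred + eps * np.eye(len(S_pred)), S_obs))))
    alpha = max(1.0, ratio)
    return alpha * P_pred

if __name__ == "__main__":
    H = np.array([[1.0, 0.0]])
    R = np.array([[0.1]])
    P = np.eye(2)
    recent_innovations = np.random.randn(50, 1) * 2.0      # deliberately larger than predicted
    print(inflate_state_covariance(P, H, R, recent_innovations))
```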
Neurocomputing | 2010
Niranjan A. Subrahmanya; Yung C. Shin
Training of recurrent neural networks (RNNs) is known to be a very difficult task. This work proposes a novel constructive method for simultaneous structure and parameter training of Elman-type RNNs using a combination of particle swarm optimization (PSO) and the covariance matrix adaptation evolution strategy (CMA-ES). The proposed method allows the imposition of certain stability conditions, which can be maintained throughout the constructive process. The examples reported show a monotonic decrease in training error throughout the constructive process and also demonstrate the efficiency of the proposed method for structure and parameter training of RNNs.
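A hedged sketch of an Elman-style RNN forward pass together with the kind of stability condition a constructive procedure can enforce (spectral radius of the recurrent weight matrix below one). The PSO/CMA-ES parameter search itself is omitted, and all names here are illustrative.

```python
import numpy as np

def elman_forward(x_seq, W_in, W_rec, W_out, b_h, b_o):
    """Run an Elman RNN over a sequence; returns the output at each step."""
    h = np.zeros(W_rec.shape[0])
    outputs = []
    for x in x_seq:
        h = np.tanh(W_in @ x + W_rec @ h + b_h)   # context (hidden) state update
        outputs.append(W_out @ h + b_o)
    return np.array(outputs)

def is_stable(W_rec):
    """Simple stability check: spectral radius of the recurrent matrix < 1."""
    return np.max(np.abs(np.linalg.eigvals(W_rec))) < 1.0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_in, n_hidden, n_out = 2, 4, 1
    W_in = rng.normal(scale=0.3, size=(n_hidden, n_in))
    W_rec = rng.normal(scale=0.3, size=(n_hidden, n_hidden))
    W_rec *= 0.9 / max(np.max(np.abs(np.linalg.eigvals(W_rec))), 1e-12)  # rescale to enforce stability
    W_out = rng.normal(scale=0.3, size=(n_out, n_hidden))
    x_seq = rng.normal(size=(20, n_in))
    print(is_stable(W_rec), elman_forward(x_seq, W_in, W_rec, W_out,
                                          np.zeros(n_hidden), np.zeros(n_out)).shape)
```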
IEEE Transactions on Industrial Informatics | 2013
Mihajlo Grbovic; Weichang Li; Niranjan A. Subrahmanya; Adam K. Usadi; Slobodan Vucetic
A typical assumption in supervised fault detection is that abundant historical data are available prior to model learning, where all types of faults have already been observed at least once. This assumption is likely to be violated in practical settings, as new fault types can emerge over time. In this paper we study this often overlooked cold-start learning problem in data-driven fault detection, where in the beginning only normal operation data are available and faulty operation data become available only as faults occur. We explore how to leverage the strengths of unsupervised and supervised approaches to build a model capable of detecting faults even before any have been observed, and of improving over time as new fault types are observed. The proposed framework was evaluated on the benchmark Tennessee Eastman Process data. The proposed fusion model performed better on both unseen and seen faults than the stand-alone unsupervised and supervised models.
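An illustrative sketch of the cold-start fusion idea: an unsupervised detector is trained on normal data only, a supervised component is added as labeled fault data arrive, and a sample is flagged if either component flags it. The specific detectors (Mahalanobis distance, nearest fault centroid) and the OR-style fusion rule are simplifying assumptions, not the paper's exact models.

```python
import numpy as np

class ColdStartFaultDetector:
    def __init__(self, threshold):
        self.threshold = threshold
        self.fault_centroids = {}              # fault label -> centroid, grows over time

    def fit_normal(self, X_normal):
        """Unsupervised part: model of normal operation only."""
        self.mu = X_normal.mean(axis=0)
        self.prec = np.linalg.inv(np.cov(X_normal, rowvar=False)
                                  + 1e-6 * np.eye(X_normal.shape[1]))

    def add_fault_examples(self, label, X_fault):
        """Supervised part: incorporate a newly observed fault type."""
        self.fault_centroids[label] = X_fault.mean(axis=0)

    def predict(self, x):
        d = x - self.mu
        unsup_score = float(d @ self.prec @ d)  # Mahalanobis distance to normal operation
        sup_hit = any(np.linalg.norm(x - c) < np.linalg.norm(x - self.mu)
                      for c in self.fault_centroids.values())
        return unsup_score > self.threshold or sup_hit

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    det = ColdStartFaultDetector(threshold=16.0)
    det.fit_normal(rng.normal(size=(500, 4)))
    det.add_fault_examples("fault_1", rng.normal(loc=3.0, size=(20, 4)))
    print(det.predict(np.full(4, 3.0)), det.predict(np.zeros(4)))
```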
Engineering Applications of Artificial Intelligence | 2013
Niranjan A. Subrahmanya; Yung C. Shin
A novel framework based on the use of dynamic neural networks for data-based process monitoring, fault detection and diagnostics of non-linear systems with partial state measurement is presented in this paper. The proposed framework considers the presence of three kinds of states in a generic system model: states that can easily be measured in real time and in-situ, states that are difficult to measure online but can be measured offline to generate training data, and states that cannot be measured at all. The motivation for such a categorization of state variables comes from a wide class of problems in the manufacturing and chemical industries, wherein certain states are not measurable without expensive equipment or offline analysis, while some other states may not be accessible at all. The framework makes use of a recurrent neural network for modeling the hidden dynamics of the system from available measurements and uses this model along with a non-linear observer to augment the information provided by the measured variables. The performance of the proposed method is verified on a synthetic problem as well as a benchmark simulation problem.
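A generic observer skeleton in the spirit of the proposed framework: a learned dynamic model predicts all states (measured and unmeasured), and a correction term driven by the error on the measured outputs pulls the estimate back. The linear correction gain and the placeholder model below are assumptions for illustration; the paper uses a recurrent neural network together with a non-linear observer.

```python
import numpy as np

def observer_step(x_hat, u, y_meas, f_model, C, L):
    """One predictor-corrector step: x_hat_next = f(x_hat, u) + L (y - C x_hat)."""
    x_pred = f_model(x_hat, u)
    return x_pred + L @ (y_meas - C @ x_hat)

if __name__ == "__main__":
    # Toy 2-state system where only the first state is measured.
    def f_model(x, u):
        return np.array([0.9 * x[0] + 0.1 * x[1],
                         -0.2 * np.tanh(x[0]) + 0.95 * x[1] + u])
    C = np.array([[1.0, 0.0]])
    L = np.array([[0.5], [0.2]])
    x_true, x_hat = np.array([1.0, -1.0]), np.zeros(2)
    for _ in range(50):
        y = C @ x_true                          # only the measured output is available
        x_true = f_model(x_true, 0.0)
        x_hat = observer_step(x_hat, 0.0, y, f_model, C, L)
    print(x_true, x_hat)                        # estimate should track the true state
```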
International Journal of Machine Learning and Cybernetics | 2013
Niranjan A. Subrahmanya; Yung C. Shin
In many machine learning and pattern analysis applications, grouping of features during model development and the selection of a small number of relevant groups can be useful to improve the interpretability of the learned parameters. Although this problem has received significant attention lately, most approaches require the manual tuning of one or more hyper-parameters. In order to overcome this drawback, this work presents a novel hierarchical Bayesian formulation of a generalized linear model and estimates the posterior distribution of the parameters and hyper-parameters of the model within a completely Bayesian paradigm based on variational inference. All the required computations are analytically tractable. The performance and applicability of the proposed framework are demonstrated on synthetic and real-world examples.
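A simplified sketch of group-level automatic relevance determination for a linear model: each feature group has a precision hyper-parameter, and a large value prunes the whole group. This EM/ARD-style iteration is a stand-in for the paper's fully Bayesian variational treatment, and all names are illustrative.

```python
import numpy as np

def group_ard(X, y, groups, noise_var=0.1, n_iter=50):
    """groups: list of index arrays, one per feature group."""
    n, d = X.shape
    alpha = np.ones(d)                                    # per-feature precision, tied within a group
    for _ in range(n_iter):
        A = np.diag(alpha)
        Sigma = np.linalg.inv(A + X.T @ X / noise_var)    # posterior covariance of the weights
        mu = Sigma @ X.T @ y / noise_var                  # posterior mean of the weights
        for g in groups:
            # Group update: group size divided by the expected squared norm of the group.
            alpha_g = len(g) / (mu[g] @ mu[g] + np.trace(Sigma[np.ix_(g, g)]) + 1e-12)
            alpha[g] = min(alpha_g, 1e6)                  # cap to avoid numerical blow-up
    return mu, alpha

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 6))
    y = X[:, :3] @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)  # only group 1 matters
    mu, alpha = group_ard(X, y, groups=[np.arange(0, 3), np.arange(3, 6)])
    print(np.round(mu, 2), np.round(alpha, 1))            # second group should be pruned (large alpha)
```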
sensor array and multichannel signal processing workshop | 2012
Weichang Li; Niranjan A. Subrahmanya; Feng Xu
This paper proposes a class of joint subspace and sparse filtering algorithms with an example application in tracking moving targets in a highly reverberant environment. Motivated by recent work in low-rank and sparse matrix decomposition, we have developed filtering algorithms that alternate between tracking the low-rank subspace and estimating the instantaneously sparse components, both of which are recursively updated as new data arrive. The algorithms are particularly suitable for online applications with streaming data or sequential processing of extremely large data sets for which batch matrix decomposition is computationally infeasible. In contrast to the simple signal and noise subspace decomposition of traditional subspace processing, the algorithms described here assume a generative model consisting of a low-rank subspace, an additional sparse component and noise. This approach is well suited for tracking a sparse moving target signal in the presence of low-rank reverberations. We demonstrate the target tracking performance via a set of beam-space field data.
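A minimal online low-rank-plus-sparse filtering sketch: for each new snapshot, project onto the tracked subspace, soft-threshold the residual to obtain the sparse (target-like) component, then nudge the subspace toward the remainder. The gradient-style subspace update and thresholding rule are illustrative simplifications, not the recursive algorithms described in the paper.

```python
import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def online_lrs_step(y, U, lam=0.5, step=0.05):
    """One update under the model y ~ U @ c + s + noise, with s sparse."""
    c = np.linalg.lstsq(U, y, rcond=None)[0]   # coefficients in the current subspace
    s = soft_threshold(y - U @ c, lam)         # instantaneous sparse component
    residual = y - U @ c - s
    U = U + step * np.outer(residual, c)       # rank-one gradient-style subspace update
    U, _ = np.linalg.qr(U)                     # re-orthonormalize the basis
    return U, c, s

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    d, r = 30, 3
    U_true, _ = np.linalg.qr(rng.normal(size=(d, r)))
    U, _ = np.linalg.qr(rng.normal(size=(d, r)))
    for t in range(200):
        y = U_true @ rng.normal(size=r)
        if t % 20 == 0:
            y[rng.integers(d)] += 5.0          # occasional sparse "target" spike
        U, c, s = online_lrs_step(y, U)
    print(np.linalg.norm(U_true @ U_true.T - U @ U.T))  # subspace mismatch (smaller is better)
```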
Computer-aided chemical engineering | 2012
Shiva Kameswaran; Niranjan A. Subrahmanya
Model predictive control (MPC) is based on repeated solution of finite horizon dynamic optimization problems, where only the control decisions close to the start of the time horizon are of interest. This observation suggests the potential use of multi-fidelity models with models of reduced complexity and fidelity being used for predictions away from the current time instant. We explore this idea through theoretical analysis and simulations. We attempt to quantify the effect of (bounded) error in computing future states and controls on current control actions. The assumption of bounded error covers a number of potential approaches based on model and solution approximation techniques. One of these approaches, using reduced order models, is illustrated using a simulated process. We provide comparisons with conventional MPC in terms of the overall quality of the control as well as the computational complexity and demonstrate that MPC based on multi-fidelity models performs better than MPC based only on reduced order models.
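A sketch of the multi-fidelity prediction idea in MPC: the first few steps of the horizon are rolled out with the full model and the remaining steps with a cheaper reduced-order model, and only the first control of the best candidate sequence is applied. The toy models and the random-shooting optimizer below are assumptions made purely for illustration.

```python
import numpy as np

def full_model(x, u):
    return np.array([x[0] + 0.1 * x[1], x[1] + 0.1 * (u - np.sin(x[0]))])

def reduced_model(x, u):
    # Cheaper, linearized stand-in for the full model.
    return np.array([x[0] + 0.1 * x[1], x[1] + 0.1 * (u - x[0])])

def mpc_action(x0, horizon=10, switch=3, n_candidates=200,
               rng=np.random.default_rng(3)):
    """Random-shooting MPC: full model for the first `switch` steps,
    reduced model for the rest of the horizon."""
    best_u0, best_cost = 0.0, np.inf
    for _ in range(n_candidates):
        u_seq = rng.uniform(-1.0, 1.0, size=horizon)
        x, cost = x0.copy(), 0.0
        for k, u in enumerate(u_seq):
            x = full_model(x, u) if k < switch else reduced_model(x, u)
            cost += x @ x + 0.01 * u * u          # quadratic state and control penalty
        if cost < best_cost:
            best_u0, best_cost = u_seq[0], cost
    return best_u0                                # apply only the first control action

if __name__ == "__main__":
    print(mpc_action(np.array([0.5, 0.0])))
```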
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2010
Niranjan A. Subrahmanya; Yung C. Shin
Journal of Manufacturing Science and Engineering-transactions of The Asme | 2008
Niranjan A. Subrahmanya; Yung C. Shin
Archive | 2011
Krishnan Kumaran; Niranjan A. Subrahmanya; Pavlin B. Entchev; Randy C. Tolman; Renzo M. Angeles Boza