Asokan Mulayath Variyath
Memorial University of Newfoundland
Publications
Featured research published by Asokan Mulayath Variyath.
Nature Biotechnology | 2009
Terri Addona; Susan E. Abbatiello; Birgit Schilling; Steven J. Skates; D. R. Mani; David M. Bunk; Clifford H. Spiegelman; Lisa J. Zimmerman; Amy-Joan L. Ham; Hasmik Keshishian; Steven C. Hall; Simon Allen; Ronald K. Blackman; Christoph H. Borchers; Charles Buck; Michael P. Cusack; Nathan G. Dodder; Bradford W. Gibson; Jason M. Held; Tara Hiltke; Angela M. Jackson; Eric B. Johansen; Christopher R. Kinsinger; Jing Li; Mehdi Mesri; Thomas A. Neubert; Richard K. Niles; Trenton Pulsipher; David F. Ransohoff; Henry Rodriguez
Verification of candidate biomarkers relies upon specific, quantitative assays optimized for selective detection of target proteins, and is increasingly viewed as a critical step in the discovery pipeline that bridges unbiased biomarker discovery to preclinical validation. Although individual laboratories have demonstrated that multiple reaction monitoring (MRM) coupled with isotope dilution mass spectrometry can quantify candidate protein biomarkers in plasma, reproducibility and transferability of these assays between laboratories have not been demonstrated. We describe a multilaboratory study, conducted by the NCI Clinical Proteomic Technology Assessment for Cancer (NCI-CPTAC) network, to assess the reproducibility, recovery, linear dynamic range, and limits of detection and quantification of multiplexed, MRM-based assays. Using common materials and standardized protocols, we demonstrate that these assays can be highly reproducible within and across laboratories and instrument platforms, and are sensitive to low μg/ml protein concentrations in unfractionated plasma. We provide data and benchmarks against which individual laboratories can compare their performance and evaluate new technologies for biomarker verification in plasma.
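The figures of merit named in this abstract (reproducibility, limits of detection and quantification) can be illustrated with a generic sketch. The blank-based 3-SD and 10-SD definitions below are textbook conventions, not necessarily the response-curve procedure used in the study, and the function name, signal units, and numbers are hypothetical.

```python
import numpy as np

def assay_figures_of_merit(blank, low_level, slope):
    """Illustrative figures of merit for a quantitative assay.

    blank     : replicate signals of a blank (no analyte) sample
    low_level : replicate signals of a low-concentration sample
    slope     : calibration-curve slope (signal per unit concentration)
    """
    blank = np.asarray(blank, dtype=float)
    low = np.asarray(low_level, dtype=float)

    # Inter-replicate reproducibility expressed as a coefficient of variation (%).
    cv_percent = 100.0 * low.std(ddof=1) / low.mean()

    # Common textbook approximations: LOD uses 3 SD of the blank,
    # LOQ uses 10 SD, both converted to concentration via the slope.
    sd_blank = blank.std(ddof=1)
    lod = 3.0 * sd_blank / slope
    loq = 10.0 * sd_blank / slope
    return cv_percent, lod, loq

# Example with made-up numbers (arbitrary signal units, slope in signal per ug/ml).
cv, lod, loq = assay_figures_of_merit(
    blank=[0.9, 1.1, 1.0, 1.2], low_level=[10.4, 9.8, 10.9, 10.1], slope=5.0)
print(f"CV = {cv:.1f}%, LOD ~ {lod:.2f} ug/ml, LOQ ~ {loq:.2f} ug/ml")
```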
Journal of Proteome Research | 2010
David L. Tabb; Lorenzo Vega-Montoto; Paul A. Rudnick; Asokan Mulayath Variyath; Amy-Joan L. Ham; David M. Bunk; Lisa E. Kilpatrick; Dean Billheimer; Ronald K. Blackman; Steven A. Carr; Karl R. Clauser; Jacob D. Jaffe; Kevin A. Kowalski; Thomas A. Neubert; Fred E. Regnier; Birgit Schilling; Tony Tegeler; Mu Wang; Pei Wang; Jeffrey R. Whiteaker; Lisa J. Zimmerman; Susan J. Fisher; Bradford W. Gibson; Christopher R. Kinsinger; Mehdi Mesri; Henry Rodriguez; Stephen E. Stein; Paul Tempst; Amanda G. Paulovich; Daniel C. Liebler
The complexity of proteomic instrumentation for LC-MS/MS introduces many possible sources of variability. Data-dependent sampling of peptides constitutes a stochastic element at the heart of discovery proteomics. Although this variation impacts the identification of peptides, proteomic identifications are far from completely random. In this study, we analyzed interlaboratory data sets from the NCI Clinical Proteomic Technology Assessment for Cancer to examine repeatability and reproducibility in peptide and protein identifications. Included data spanned 144 LC-MS/MS experiments on four Thermo LTQ and four Orbitrap instruments. Samples included yeast lysate, the NCI-20 defined dynamic range protein mix, and the Sigma UPS 1 defined equimolar protein mix. Some of our findings reinforced conventional wisdom, such as repeatability and reproducibility being higher for proteins than for peptides. Most lessons from the data, however, were more subtle. Orbitraps proved capable of higher repeatability and reproducibility, but aberrant performance occasionally erased these gains. Even the simplest protein digestions yielded more peptide ions than LC-MS/MS could identify during a single experiment. We observed that peptide lists from pairs of technical replicates overlapped by 35-60%, giving a range for peptide-level repeatability in these experiments. Sample complexity did not appear to affect peptide identification repeatability, even as numbers of identified spectra changed by an order of magnitude. Statistical analysis of protein spectral counts revealed greater stability across technical replicates for Orbitraps, making them superior to LTQ instruments for biomarker candidate discovery. The most repeatable peptides were those corresponding to conventional tryptic cleavage sites, those that produced intense MS signals, and those that resulted from proteins generating many distinct peptides. Reproducibility among different instruments of the same type lagged behind repeatability of technical replicates on a single instrument by several percent. These findings reinforce the importance of evaluating repeatability as a fundamental characteristic of analytical technologies.
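As a small illustration of the repeatability measure discussed above, the sketch below computes the overlap of peptide identification lists from two technical replicates in two common ways; the exact overlap definition used in the study may differ, and the peptide sequences are arbitrary examples.

```python
def peptide_overlap(rep_a, rep_b):
    """Fraction of identified peptides shared by two technical replicates.

    rep_a, rep_b : iterables of peptide sequences identified in each run.
    Returns the overlap relative to the smaller list and the Jaccard index.
    """
    a, b = set(rep_a), set(rep_b)
    shared = a & b
    overlap_vs_smaller = len(shared) / min(len(a), len(b))
    jaccard = len(shared) / len(a | b)
    return overlap_vs_smaller, jaccard

# Toy example: two replicate runs identifying overlapping peptide sets.
run1 = {"LVNELTEFAK", "YLYEIAR", "AEFVEVTK", "QTALVELVK"}
run2 = {"LVNELTEFAK", "YLYEIAR", "LVTDLTK", "QTALVELVK"}
print(peptide_overlap(run1, run2))  # (0.75, 0.6)
```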
Molecular & Cellular Proteomics | 2010
Paul A. Rudnick; Karl R. Clauser; Lisa E. Kilpatrick; Dmitrii V. Tchekhovskoi; P. Neta; Nikša Blonder; Dean Billheimer; Ronald K. Blackman; David M. Bunk; Amy-Joan L. Ham; Jacob D. Jaffe; Christopher R. Kinsinger; Mehdi Mesri; Thomas A. Neubert; Birgit Schilling; David L. Tabb; Tony Tegeler; Lorenzo Vega-Montoto; Asokan Mulayath Variyath; Mu Wang; Pei Wang; Jeffrey R. Whiteaker; Lisa J. Zimmerman; Steven A. Carr; Susan J. Fisher; Bradford W. Gibson; Amanda G. Paulovich; Fred E. Regnier; Henry Rodriguez; Cliff Spiegelman
A major unmet need in LC-MS/MS-based proteomics analyses is a set of tools for quantitative assessment of system performance and evaluation of technical variability. Here we describe 46 system performance metrics for monitoring chromatographic performance, electrospray source stability, MS1 and MS2 signals, dynamic sampling of ions for MS/MS, and peptide identification. Applied to data sets from replicate LC-MS/MS analyses, these metrics displayed consistent, reasonable responses to controlled perturbations. The metrics typically displayed variations less than 10% and thus can reveal even subtle differences in performance of system components. Analyses of data from interlaboratory studies conducted under a common standard operating procedure identified outlier data and provided clues to specific causes. Moreover, interlaboratory variation reflected by the metrics indicates which system components vary the most between laboratories. Application of these metrics enables rational, quantitative quality assessment for proteomics and other LC-MS/MS analytical applications.
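The 46 metrics themselves are produced by a dedicated analysis pipeline; purely as a downstream illustration, the sketch below flags metrics whose run-to-run coefficient of variation exceeds a chosen threshold (10% here, echoing the variation level mentioned above). Metric names and values are hypothetical.

```python
import numpy as np

def flag_unstable_metrics(metric_values, cv_threshold=10.0):
    """Flag performance metrics whose variation across replicate runs
    exceeds a chosen coefficient-of-variation threshold (in percent).

    metric_values : dict mapping metric name -> list of values, one per run.
    """
    flagged = {}
    for name, values in metric_values.items():
        v = np.asarray(values, dtype=float)
        cv = 100.0 * v.std(ddof=1) / abs(v.mean())
        if cv > cv_threshold:
            flagged[name] = round(cv, 1)
    return flagged

# Hypothetical metrics from four replicate LC-MS/MS runs of the same sample.
runs = {
    "median_peak_width_s":  [22.1, 21.8, 22.5, 22.0],       # stable
    "ms1_median_intensity": [3.1e7, 2.9e7, 1.6e7, 3.0e7],   # one aberrant run
    "identified_spectra":   [10150, 9980, 10310, 10080],
}
print(flag_unstable_metrics(runs))  # only the MS1 intensity metric is flagged
```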
Molecular & Cellular Proteomics | 2010
Amanda G. Paulovich; Dean Billheimer; Amy-Joan L. Ham; Lorenzo Vega-Montoto; Paul A. Rudnick; David L. Tabb; Pei Wang; Ronald K. Blackman; David M. Bunk; Karl R. Clauser; Christopher R. Kinsinger; Birgit Schilling; Tony Tegeler; Asokan Mulayath Variyath; Mu Wang; Jeffrey R. Whiteaker; Lisa J. Zimmerman; David Fenyö; Steven A. Carr; Susan J. Fisher; Bradford W. Gibson; Mehdi Mesri; Thomas A. Neubert; Fred E. Regnier; Henry Rodriguez; Cliff Spiegelman; Stephen E. Stein; Paul Tempst; Daniel C. Liebler
Optimal performance of LC-MS/MS platforms is critical to generating high quality proteomics data. Although individual laboratories have developed quality control samples, there is no widely available performance standard of biological complexity (and associated reference data sets) for benchmarking of platform performance for analysis of complex biological proteomes across different laboratories in the community. Individual preparations of the yeast Saccharomyces cerevisiae proteome have been used extensively by laboratories in the proteomics community to characterize LC-MS platform performance. The yeast proteome is uniquely attractive as a performance standard because it is the most extensively characterized complex biological proteome and the only one associated with several large scale studies estimating the abundance of all detectable proteins. In this study, we describe a standard operating protocol for large scale production of the yeast performance standard and offer aliquots to the community through the National Institute of Standards and Technology where the yeast proteome is under development as a certified reference material to meet the long term needs of the community. Using a series of metrics that characterize LC-MS performance, we provide a reference data set demonstrating typical performance of commonly used ion trap instrument platforms in expert laboratories; the results provide a basis for laboratories to benchmark their own performance, to improve upon current methods, and to evaluate new technologies. Additionally, we demonstrate how the yeast reference, spiked with human proteins, can be used to benchmark the power of proteomics platforms for detection of differentially expressed proteins at different levels of concentration in a complex matrix, thereby providing a metric to evaluate and minimize preanalytical and analytical variation in comparative proteomics experiments.
Journal of Computational and Graphical Statistics | 2008
Jiahua Chen; Asokan Mulayath Variyath; Bovas Abraham
Computing a profile empirical likelihood function, which involves constrained maximization, is a key step in applications of empirical likelihood. However, in some situations, the required numerical problem has no solution. In this case, the convention is to assign a zero value to the profile empirical likelihood. This strategy has at least two limitations. First, it is numerically difficult to determine that there is no solution; second, no information is provided on the relative plausibility of the parameter values where the likelihood is set to zero. In this article, we propose a novel adjustment to the empirical likelihood that retains all the optimality properties and guarantees a sensible value of the likelihood at any parameter value. Coupled with this adjustment, we introduce an iterative algorithm that is guaranteed to converge. Our simulations indicate that the adjusted empirical likelihood is much faster to compute than the profile empirical likelihood. The confidence regions constructed via the adjusted empirical likelihood are found to have coverage probabilities closer to the nominal levels without employing complex procedures such as Bartlett correction or bootstrap calibration.
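The adjustment can be sketched concretely. The following is a minimal illustration for the scalar-mean case, assuming estimating functions g_i = x_i - theta and the pseudo-observation g_{n+1} = -(a_n/n) * sum(g_i) used by adjusted empirical likelihood; the particular level a_n = max(1, log(n)/2) and the root-finding details are implementation choices of this sketch, not the authors' code.

```python
import numpy as np
from scipy.optimize import brentq

def ael_log_ratio(x, theta, a_n=None):
    """Adjusted empirical log-likelihood ratio (times 2) for a scalar mean.

    Appends the pseudo-observation g_{n+1} = -(a_n / n) * sum(g_i), with
    g_i = x_i - theta, so that zero always lies inside the convex hull of the
    augmented estimating-function values and the maximization is well defined.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    if a_n is None:
        a_n = max(1.0, np.log(n) / 2.0)  # one common choice of adjustment level
    g = x - theta
    g = np.append(g, -a_n * g.mean())    # the adjustment term g_{n+1}

    # Dual problem: find lambda solving sum g_i / (1 + lambda * g_i) = 0 on the
    # interval where all weights 1 + lambda * g_i stay positive.  The adjustment
    # guarantees g has both positive and negative entries (unless all are zero).
    eps = 1e-8
    lo = -1.0 / g.max() + eps
    hi = -1.0 / g.min() - eps

    def score(lam):
        return np.sum(g / (1.0 + lam * g))

    lam = brentq(score, lo, hi)
    # -2 * log adjusted empirical likelihood ratio at theta
    return 2.0 * np.sum(np.log(1.0 + lam * g))

rng = np.random.default_rng(1)
sample = rng.normal(loc=0.5, scale=1.0, size=30)
print(ael_log_ratio(sample, theta=0.5))  # small: theta near the truth
print(ael_log_ratio(sample, theta=3.0))  # large but finite: ordinary EL would have no solution here
```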
Journal of Quality Technology | 2009
Shoja’Eddin Chenouri; Stefan H. Steiner; Asokan Mulayath Variyath
To monitor a multivariate process, the classical Hotelling's T2 control chart is often used. However, it is well known that such control charts are very sensitive to the presence of outlying observations in the historical Phase I data used to set the control limit. In this paper, we propose a robust Hotelling's T2-type control chart for individual observations based on highly robust and efficient estimators of the mean vector and covariance matrix known as reweighted minimum covariance determinant (RMCD) estimators. We show how to set the control limit for the proposed chart, study its performance using simulations, and illustrate its implementation with a real-world example.
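A minimal sketch of the charting idea follows, assuming scikit-learn's MinCovDet as a stand-in for the RMCD estimator (it applies a reweighting step to the raw MCD fit, though details may differ from the paper's estimator) and an approximate chi-square control limit; the article sets its own control limits, which this sketch does not reproduce. Function names and the simulated data are illustrative only.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

def robust_t2_chart(phase1, phase2, alpha=0.005):
    """Sketch of a robust Hotelling-type T2 chart for individual observations.

    Robust location/scatter are estimated from Phase I data; Phase II points
    are charted via their T2 distances against an approximate chi-square limit.
    """
    mcd = MinCovDet(random_state=0).fit(phase1)
    center, cov = mcd.location_, mcd.covariance_
    cov_inv = np.linalg.inv(cov)

    diffs = phase2 - center
    t2 = np.einsum("ij,jk,ik->i", diffs, cov_inv, diffs)  # (x - m)' S^{-1} (x - m)

    limit = chi2.ppf(1.0 - alpha, df=phase1.shape[1])     # rough approximation only
    return t2, limit, np.where(t2 > limit)[0]             # indices signaling above the limit

rng = np.random.default_rng(0)
phase1 = rng.multivariate_normal([0, 0, 0], np.eye(3), size=200)
phase2 = np.vstack([rng.multivariate_normal([0, 0, 0], np.eye(3), size=50),
                    rng.multivariate_normal([3, 3, 0], np.eye(3), size=5)])  # shifted points
t2, limit, signals = robust_t2_chart(phase1, phase2)
print(limit, signals)
```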
Quality and Reliability Engineering International | 2011
Shojaeddin Chenouri; Asokan Mulayath Variyath
The use of Hotelling's T2 charts with high-breakdown robust estimates to monitor multivariate individual observations is a recent trend in control chart methodology. Vargas (J. Qual. Tech. 2003; 35: 367-376) introduced Hotelling's T2 charts based on the minimum volume ellipsoid (MVE) and the minimum covariance determinant (MCD) estimates to identify outliers in Phase I data. Studies carried out by Jensen et al. (Qual. Rel. Eng. Int. 2007; 23: 615-629) indicated that the performance of these charts depends heavily on the sample size, the number of outliers, and the dimensionality of the Phase I data. Chenouri et al. (J. Qual. Tech. 2009; 41: 259-271) recently proposed robust Hotelling's T2 control charts for monitoring Phase II data based on the reweighted MCD (RMCD) estimates of the mean vector and covariance matrix from Phase I. They showed that Phase II RMCD charts perform better than Phase II standard Hotelling's T2 charts based on outlier-free Phase I data, where the outlier-free Phase I data were obtained by applying MCD and MVE T2 charts to the historical data. The reweighted MVE (RMVE) and S-estimators are two competitors of the RMCD estimators, and it is natural to ask whether Phase II Hotelling's T2 charts with RMCD and RMVE estimates exhibit a pattern similar to that observed by Jensen et al. (Qual. Rel. Eng. Int. 2007; 23: 615-629) for MCD- and MVE-based Phase I Hotelling's T2 charts. In this paper, we conduct a comparative study to assess the performance of Hotelling's T2 charts with RMCD, RMVE, and S-estimators using a large number of Monte Carlo simulations under different data scenarios. Our results are generally in favor of the RMCD-based charts irrespective of sample size, outliers, and dimensionality of the Phase I data.
Journal of Quality and Reliability Engineering | 2013
Asokan Mulayath Variyath; Jayasankar Vattathoor
Hotelling's T2 control charts are widely used in industry to monitor multivariate processes. The classical estimators, the sample mean and the sample covariance, used in these charts are highly sensitive to outliers in the data. In Phase I monitoring, control limits are established using historical data after identifying and removing multivariate outliers. We propose Hotelling's T2 control charts with high-breakdown robust estimators based on the reweighted minimum covariance determinant (RMCD) and the reweighted minimum volume ellipsoid (RMVE) to monitor multivariate observations in Phase I data. We assessed the performance of these robust control charts with a large number of Monte Carlo simulations under different data scenarios and found that the proposed control charts perform better than existing methods.
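The Phase I screening step described above can be sketched as follows, again with scikit-learn's MinCovDet standing in for the RMCD/RMVE estimators and an approximate chi-square cutoff in place of the empirically determined limits studied in the paper; all names and data are illustrative.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

def phase1_screening(x, alpha=0.025):
    """Sketch of Phase I screening with a robust T2-type chart.

    Robust squared distances of the historical observations are computed from
    an MCD-type fit; points beyond an approximate chi-square cutoff are treated
    as outliers and set aside before Phase I control limits are established.
    """
    x = np.asarray(x, dtype=float)
    mcd = MinCovDet(random_state=0).fit(x)
    d2 = mcd.mahalanobis(x)                     # squared robust distances
    cutoff = chi2.ppf(1.0 - alpha, df=x.shape[1])
    keep = d2 <= cutoff
    return x[keep], np.where(~keep)[0]          # cleaned data, flagged indices

rng = np.random.default_rng(42)
clean = rng.multivariate_normal([0, 0], [[1, 0.3], [0.3, 1]], size=95)
contaminated = rng.multivariate_normal([4, 4], np.eye(2), size=5)
cleaned, flagged = phase1_screening(np.vstack([clean, contaminated]))
print(len(cleaned), flagged)  # most of the last five rows should be flagged
```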
Quality and Reliability Engineering International | 2014
Asokan Mulayath Variyath; Jayasankar Vattathoor
Multivariate control charts are widely used in various industries to monitor shifts in the process mean and process variability. In Phase I monitoring, control limits are computed using historical data, and control charts based on the classical estimators (the sample mean and sample covariance) are highly sensitive to outliers in the data. We propose robust control charts with high-breakdown robust estimators based on the reweighted minimum covariance determinant (RMCD) and the reweighted minimum volume ellipsoid (RMVE) to monitor the process variability of multivariate individual observations in Phase I data under the multivariate exponentially weighted mean square error (MEWMS) and multivariate exponentially weighted moving variance (MEWMV) schemes. The control limits are computed empirically, and the performance of the proposed charts is assessed with Monte Carlo simulations under different data scenarios. The proposed robust control charts are shown to perform better than charts based on the classical estimators.
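To make the variability-monitoring scheme concrete, the sketch below implements only the basic MEWMS-style recursion with an assumed known process mean; the robust estimation, the MEWMV variant, and the empirically computed limits from the paper are not shown, and the smoothing constant, starting matrix, and trace summary are assumptions of this sketch.

```python
import numpy as np

def mewms_trace(x, target_mean, omega=0.1):
    """Sketch of an MEWMS-style recursion for tracking process variability.

    S_t = omega * e_t e_t' + (1 - omega) * S_{t-1}, with e_t the deviation of
    observation t from the (Phase I) process mean; the trace of S_t is one
    simple summary that could be charted against empirically derived limits.
    """
    x = np.asarray(x, dtype=float)
    p = x.shape[1]
    s = np.eye(p)                      # a neutral starting value for S_0 (a sketch assumption)
    traces = []
    for obs in x:
        e = (obs - target_mean).reshape(-1, 1)
        s = omega * (e @ e.T) + (1.0 - omega) * s
        traces.append(np.trace(s))
    return np.array(traces)

rng = np.random.default_rng(7)
stable = rng.multivariate_normal([0, 0], np.eye(2), size=60)
inflated = rng.multivariate_normal([0, 0], 4 * np.eye(2), size=20)  # variance shift
print(mewms_trace(np.vstack([stable, inflated]), target_mean=np.zeros(2))[-5:])
```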
Journal of Quality Technology | 2005
Asokan Mulayath Variyath; Bovas Abraham; Jiahua Chen
Experimental designs with performance measures as responses are common in industrial applications. The existing analysis methods often regard performance measures as sole response variables without replicates. Consequently, no degrees of freedom are left for error variance estimation in these methods. In reality, performance measures are obtained from replicated primary-response variables. Precious information is hence lost. In this paper, we suggest a jackknife-based approach on the replicated primary responses to provide an estimate of error variance of performance measures. The resulting tests for factor effects become easy to construct and more reliable. We compare the proposed method with some existing methods using two real examples and investigate the consistency of the jackknife variance estimate based on simulation studies.
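A rough sketch of the leave-one-out idea follows, applied to the replicated primary responses of a single run with a generic performance measure (a larger-the-better signal-to-noise ratio here). The paper's exact construction of the error-variance estimate used in factor-effect tests may differ, and the measure and numbers are made up for illustration.

```python
import numpy as np

def jackknife_variance(replicates, measure):
    """Jackknife variance estimate of a performance measure computed from the
    replicated primary responses of one experimental run.

    replicates : 1-D array of primary-response replicates for the run
    measure    : function mapping replicate values -> performance measure
    """
    y = np.asarray(replicates, dtype=float)
    r = len(y)
    # Leave-one-out values of the performance measure.
    loo = np.array([measure(np.delete(y, i)) for i in range(r)])
    return (r - 1) / r * np.sum((loo - loo.mean()) ** 2)

# Example performance measure: a 'larger-the-better' signal-to-noise ratio.
sn_ratio = lambda y: -10.0 * np.log10(np.mean(1.0 / y**2))

run_replicates = [20.1, 19.4, 21.3, 20.7, 19.9]  # hypothetical primary responses
print(sn_ratio(np.array(run_replicates)),
      jackknife_variance(run_replicates, sn_ratio))
```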