Publication


Featured research published by Surajit Ray.


BMC Immunology | 2008

Evaluation of MHC class I peptide binding prediction servers: Applications for vaccine research

Honghuang Lin; Surajit Ray; Songsak Tongchusak; Ellis L. Reinherz; Vladimir Brusic

Background: Protein antigens and their specific epitopes are formulation targets for epitope-based vaccines. A number of prediction servers are available for identification of peptides that bind major histocompatibility complex class I (MHC-I) molecules. The lack of standardized methodology and the large number of human MHC-I molecules make the selection of appropriate prediction servers difficult. This study reports a comparative evaluation of thirty prediction servers for seven human MHC-I molecules.

Results: Of 147 individual predictors, 39 showed excellent, 47 good, 33 marginal, and 28 poor ability to separate binders from non-binders. The classifiers for HLA-A*0201, A*0301, A*1101, B*0702, B*0801, and B*1501 have excellent, and for A*2402 moderate, classification accuracy. Sixteen prediction servers predict peptide binding affinity to MHC-I molecules with high accuracy, with correlation coefficients ranging from r = 0.55 (B*0801) to r = 0.87 (A*0201).

Conclusion: Non-linear predictors outperform matrix-based predictors. Most predictors can be improved by non-linear transformations of their raw prediction scores. The best predictors of peptide binding are also the best at predicting T-cell epitopes. We propose a new standard for MHC-I binding prediction: a common scale for normalization of prediction scores, applicable to both experimental and predicted data. The results of this study help researchers select the prediction tools and selection criteria that best suit the needs of their projects.
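The evaluation above rests on two measures: how well a server separates binders from non-binders, and how strongly its scores track measured affinities. Below is a minimal sketch of both calculations, using hypothetical peptide data and the conventional 500 nM IC50 binder threshold; it is not the study's own pipeline.

```python
# A minimal sketch of the two evaluation measures described above:
# ROC-based separation of binders from non-binders, and the correlation
# between prediction scores and measured affinities.
# The arrays below are hypothetical placeholders, not data from the study.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import roc_auc_score

# Hypothetical experimental IC50 values (nM) and server prediction scores
# for a set of peptides presented by a single HLA allele.
ic50_nM = np.array([12.0, 45.0, 300.0, 5000.0, 80.0, 20000.0])
pred_score = np.array([0.92, 0.81, 0.55, 0.20, 0.74, 0.05])

# Binders are conventionally defined as IC50 <= 500 nM.
is_binder = (ic50_nM <= 500).astype(int)

# Classification ability, summarized by the area under the ROC curve.
auc = roc_auc_score(is_binder, pred_score)

# Affinity prediction accuracy: correlate scores with log-transformed IC50
# (lower IC50 means stronger binding, hence the negative sign).
r, _ = pearsonr(pred_score, -np.log10(ic50_nM))

print(f"AUC = {auc:.2f}, Pearson r = {r:.2f}")
```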


Annals of Statistics | 2005

The topography of multivariate normal mixtures

Surajit Ray; Bruce G. Lindsay

Multivariate normal mixtures provide a flexible method of fitting high-dimensional data. It is shown that their topography, in the sense of their key features as a density, can be analyzed rigorously in lower dimensions by use of a ridgeline manifold that contains all critical points, as well as the ridges of the density. A plot of the elevations on the ridgeline shows the key features of the mixed density. In addition, by use of the ridgeline, we uncover a function that determines the number of modes of the mixed density when two components are mixed. A follow-up analysis then gives a curvature function that can be used to prove a set of modality theorems.
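For two components, the ridgeline referred to above has the closed form x*(α) = [(1−α)Σ1⁻¹ + αΣ2⁻¹]⁻¹ [(1−α)Σ1⁻¹μ1 + αΣ2⁻¹μ2] for α in [0, 1], and evaluating the mixture density along it gives the elevation plot. A minimal numerical sketch follows, with illustrative parameter values not taken from the paper.

```python
# A minimal sketch of the ridgeline idea for a two-component multivariate
# normal mixture: the critical points of the mixture density lie on the
# one-dimensional curve x*(alpha) below, so the density ("elevation")
# along it reveals the modes. Parameter values are illustrative only.
import numpy as np
from scipy.stats import multivariate_normal

mu1, mu2 = np.array([0.0, 0.0]), np.array([3.0, 2.0])
Sigma1 = np.array([[1.0, 0.3], [0.3, 1.0]])
Sigma2 = np.array([[1.5, -0.4], [-0.4, 0.8]])
pi1 = 0.45  # mixing proportion of the first component

P1, P2 = np.linalg.inv(Sigma1), np.linalg.inv(Sigma2)

def ridgeline(alpha):
    """x*(a) = [(1-a)P1 + a*P2]^{-1} [(1-a)P1@mu1 + a*P2@mu2]."""
    A = (1 - alpha) * P1 + alpha * P2
    b = (1 - alpha) * P1 @ mu1 + alpha * P2 @ mu2
    return np.linalg.solve(A, b)

def mixture_density(x):
    return (pi1 * multivariate_normal.pdf(x, mu1, Sigma1)
            + (1 - pi1) * multivariate_normal.pdf(x, mu2, Sigma2))

# Elevation plot: mixture density evaluated along the ridgeline.
alphas = np.linspace(0.0, 1.0, 501)
elevation = np.array([mixture_density(ridgeline(a)) for a in alphas])

# Modes of the mixture show up as local maxima of the elevation curve.
is_peak = (elevation[1:-1] > elevation[:-2]) & (elevation[1:-1] > elevation[2:])
print("local maxima of the elevation curve:", int(is_peak.sum()))
```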


Structural Equation Modeling | 2014

BIC and Alternative Bayesian Information Criteria in the Selection of Structural Equation Models

Kenneth A. Bollen; Jeffrey J. Harden; Surajit Ray; Jane R. Zavisca

Selecting between competing structural equation models is a common problem. Often selection is based on the chi-square test statistic or other fit indices. Bayesian information criteria are commonly used in other areas of statistical research, but with structural equation models they are used less frequently than other fit indices. This article examines several new and old information criteria (IC) that approximate Bayes factors. We compare these IC measures to common fit indices in a simulation that includes the true and false models. In moderate to large samples, the IC measures outperform the fit indices. In a second simulation we consider only the IC measures and do not include the true model. In moderate to large samples the IC measures favor approximate models that differ from the true model only by having extra parameters. Overall, SPBIC, a new IC measure, performs well relative to the other IC measures.
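As a point of reference for the criteria compared above, here is a minimal sketch of the basic BIC comparison they build on; the log-likelihoods and parameter counts are hypothetical, and SPBIC and the other variants examined in the article involve additional terms not reproduced here.

```python
# A minimal illustration of the standard BIC comparison that the criteria
# discussed above refine: BIC = -2*logL + k*ln(n), and the model with the
# smaller value is preferred. All numbers below are hypothetical.
import math

def bic(log_likelihood, n_params, n_obs):
    return -2.0 * log_likelihood + n_params * math.log(n_obs)

n = 500  # sample size
candidates = {
    "model A (10 free parameters)": bic(log_likelihood=-3210.4, n_params=10, n_obs=n),
    "model B (14 free parameters)": bic(log_likelihood=-3202.1, n_params=14, n_obs=n),
}
best = min(candidates, key=candidates.get)
for name, value in candidates.items():
    print(f"{name}: BIC = {value:.1f}")
print("preferred by BIC:", best)
```

The difference in BIC between two models roughly approximates minus twice the log of their Bayes factor, which is the sense in which such criteria serve as Bayes factor approximations.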


BMC Bioinformatics | 2011

Top scoring pairs for feature selection in machine learning and applications to cancer outcome prediction

Ping Shi; Surajit Ray; Qifu Zhu; Mark A. Kon

Background: The widely used k top scoring pair (k-TSP) algorithm is a simple yet powerful parameter-free classifier. It owes its success in many cancer microarray datasets to an effective feature selection algorithm that is based on the relative expression ordering of gene pairs. However, its general robustness does not extend to some difficult datasets, such as those involving cancer outcome prediction, which may be due to the relatively simple voting scheme used by the classifier. We believe that the performance can be enhanced by separating its effective feature selection component and combining it with a powerful classifier such as the support vector machine (SVM). More generally, the top scoring pairs generated by the k-TSP ranking algorithm can be used as a dimensionally reduced subspace for other machine learning classifiers.

Results: We developed an approach integrating the k-TSP ranking algorithm (TSP) with other machine learning methods, allowing combination of the computationally efficient, multivariate feature ranking of k-TSP with multivariate classifiers such as SVM. We evaluated this hybrid scheme (k-TSP+SVM) in a range of simulated datasets with known data structures. Compared with other feature selection methods, such as a univariate method similar to Fisher's discriminant criterion (Fisher) or a recursive feature elimination embedded in SVM (RFE), TSP becomes increasingly more effective than the other two methods as the informative genes become progressively more correlated, which is demonstrated both in terms of classification performance and the ability to recover true informative genes. We also applied this hybrid scheme to four cancer prognosis datasets, in which k-TSP+SVM outperforms the k-TSP classifier in all datasets and achieves performance comparable or superior to that of SVM alone. In agreement with what is observed in simulation, TSP appears to be a better feature selector than Fisher and RFE in some of the cancer datasets.

Conclusions: The k-TSP ranking algorithm can be used as a computationally efficient, multivariate filter method for feature selection in machine learning. SVM in combination with the k-TSP ranking algorithm outperforms k-TSP and SVM alone in simulated datasets and in some cancer prognosis datasets. Simulation studies suggest that, as a feature selector, it is better tuned to certain data characteristics, i.e., correlations among informative genes, which makes it potentially interesting as an alternative feature ranking method in pathway analysis.
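A minimal sketch of how pair ranking can feed a downstream SVM, in the spirit of the hybrid scheme described above. The data, the pair count k, and the helper pair_score are illustrative placeholders; tie handling and the disjoint-pair constraint of the full k-TSP algorithm are omitted.

```python
# TSP-style pair ranking used as a feature filter before an SVM.
# The score for a gene pair is the basic TSP criterion:
# |P(gene_i < gene_j | class 0) - P(gene_i < gene_j | class 1)|.
# Data are random placeholders, not a real microarray dataset.
import itertools
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_genes, k = 60, 40, 5
X = rng.normal(size=(n_samples, n_genes))     # hypothetical expression matrix
y = np.array([0, 1] * (n_samples // 2))       # hypothetical binary outcome

def pair_score(i, j):
    """Difference between classes in the rate of the ordering X_i < X_j."""
    less = X[:, i] < X[:, j]
    return abs(less[y == 0].mean() - less[y == 1].mean())

scores = {(i, j): pair_score(i, j)
          for i, j in itertools.combinations(range(n_genes), 2)}
top_pairs = sorted(scores, key=scores.get, reverse=True)[:k]

# Use the genes appearing in the top k pairs as the reduced feature subspace.
selected_genes = sorted({g for pair in top_pairs for g in pair})
clf = SVC(kernel="linear").fit(X[:, selected_genes], y)
print("top pairs:", top_pairs)
print("training accuracy on selected genes:", clf.score(X[:, selected_genes], y))
```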


Annals of Statistics | 2008

Quadratic distances on probabilities: A unified foundation

Bruce G. Lindsay; Marianthi Markatou; Surajit Ray; Ke Yang; Shu-Chuan Chen

This work builds a unified framework for the study of quadratic form distance measures as they are used in assessing the goodness of fit of models. Many important procedures have this structure, but the theory for these methods is dispersed and incomplete. Central to the statistical analysis of these distances is the spectral decomposition of the kernel that generates the distance. We show how this determines the limiting distribution of natural goodness-of-fit tests. Additionally, we develop a new notion, the spectral degrees of freedom of the test, based on this decomposition. The degrees of freedom are easy to compute and estimate, and can be used as a guide in the construction of useful procedures in this class.
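In symbols, the framework above can be sketched as follows; the notation is illustrative and the paper's own normalization conventions may differ in detail. The limiting distribution is the usual weighted chi-square form for a centered kernel, and the degrees of freedom shown is a Satterthwaite-style summary of the eigenvalues.

```latex
% A sketch of the standard objects behind the framework described above.
% Notation is illustrative; the paper's own conventions may differ in detail.
% Quadratic distance between distributions F and G generated by a kernel K:
\[
  d_K(F, G) \;=\; \iint K(x, y)\, d(F - G)(x)\, d(F - G)(y).
\]
% Spectral decomposition of the G-centered kernel:
\[
  K_G(x, y) \;=\; \sum_{j \ge 1} \lambda_j\, \phi_j(x)\, \phi_j(y).
\]
% Limiting null distribution of the scaled empirical distance and a
% Satterthwaite-style spectral degrees of freedom built from the eigenvalues:
\[
  n\, d_K(\hat{F}_n, G) \;\xrightarrow{\;d\;}\; \sum_{j \ge 1} \lambda_j\, \chi^2_{1,j},
  \qquad
  \mathrm{DOF}(K_G) \;=\; \frac{\bigl(\sum_j \lambda_j\bigr)^2}{\sum_j \lambda_j^2}.
\]
```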


Statistics and Computing | 2017

Functional principal component analysis of spatially correlated data

Chong Liu; Surajit Ray; Giles Hooker

This paper focuses on the analysis of spatially correlated functional data. We propose a parametric model for spatial correlation, and the between-curve correlation is modeled by correlating functional principal component scores of the functional data. Additionally, in the sparse observation framework, we propose a novel approach of spatial principal analysis by conditional expectation to explicitly estimate spatial correlations and reconstruct individual curves. Assuming spatial stationarity, empirical spatial correlations are calculated as the ratio of eigenvalues of the smoothed covariance surface Cov(X_i(s), X_i(t)) and the cross-covariance surface Cov(X_i(s), X_j(t)).
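A schematic illustration of the eigenvalue-ratio calculation at the end of the abstract, using made-up discretized covariance surfaces in place of smoothed estimates from real curves; the evaluation grid, the exponential within-curve covariance, and the constant spatial correlation are all assumptions for the example.

```python
# Schematic version of the eigenvalue-ratio idea: under the model, the
# correlation between the k-th FPC scores of two curves at a given spatial
# separation is the ratio of the k-th eigenvalue of the cross-covariance
# surface to the k-th eigenvalue of the within-curve covariance surface.
# All surfaces below are made up for illustration.
import numpy as np

t = np.linspace(0.0, 1.0, 50)                            # common evaluation grid
within_cov = np.exp(-np.abs(t[:, None] - t[None, :]))    # hypothetical Cov(X_i(s), X_i(t))
rho_true = 0.6                                           # hypothetical spatial correlation
cross_cov = rho_true * within_cov                        # hypothetical Cov(X_i(s), X_j(t))

# Eigenvalues of the discretized surfaces, largest first.
eig_within = np.linalg.eigvalsh(within_cov)[::-1]
eig_cross = np.linalg.eigvalsh(cross_cov)[::-1]

# Empirical spatial correlations for the leading principal components.
spatial_corr = eig_cross[:3] / eig_within[:3]
print("estimated spatial correlations (first 3 FPCs):", np.round(spatial_corr, 3))
```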


PLOS ONE | 2012

A Computational Framework to Emulate the Human Perspective in Flow Cytometric Data Analysis

Surajit Ray; Saumyadipta Pyne


Sociological Methods & Research | 2012

A Comparison of Bayes Factor Approximation Methods Including Two New Methods

Kenneth A. Bollen; Surajit Ray; Jane R. Zavisca; Jeffrey J. Harden



Methods in Molecular Biology | 2011

Data processing and analysis for protein microarrays.

David S. DeLuca; Ovidiu Marina; Surajit Ray; Guang Lan Zhang; Catherine J. Wu; Vladimir Brusic


Immunome Research | 2007

Amino acid biophysical properties in the statistical prediction of peptide-MHC class I binding

Surajit Ray; Thomas B. Kepler


Collaboration


Dive into Surajit Ray's collaborations.

Top Co-Authors

Bruce G. Lindsay, Pennsylvania State University
Stephen M. Pizer, University of North Carolina at Chapel Hill
Edward L. Chaney, University of North Carolina at Chapel Hill
Joshua H. Levy, University of North Carolina at Chapel Hill
Robert E. Broadhurst, University of North Carolina at Chapel Hill
Jeffrey J. Harden, University of Colorado Boulder
Kenneth A. Bollen, University of North Carolina at Chapel Hill