Publication


Featured research published by Michael W. Trosset.


Computational Statistics & Data Analysis | 2008

The out-of-sample problem for classical multidimensional scaling

Michael W. Trosset; Carey E. Priebe

Out-of-sample embedding techniques insert additional points into previously constructed configurations. An out-of-sample extension of classical multidimensional scaling is presented. The out-of-sample extension is formulated as an unconstrained nonlinear least-squares problem. The objective function is a fourth-order polynomial, easily minimized by standard gradient-based methods for numerical optimization. Two examples are presented.
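
The formulation lends itself to a compact illustration. The sketch below sets up one natural least-squares criterion for inserting a single new point into a fixed classical MDS configuration and minimizes it with a standard gradient-based routine; the function name oos_embed and the exact form of the criterion are illustrative and may differ in detail from the paper's. Because the objective is a polynomial in y, its gradient is available in closed form and is supplied to BFGS.

```python
import numpy as np
from scipy.optimize import minimize

def oos_embed(X, delta2, y0=None):
    """Embed one new point into an existing CMDS configuration X (n x d).

    delta2 : squared dissimilarities from the new point to the n embedded points.
    Minimizes a quartic least-squares criterion in y (a sketch; the paper's
    criterion may differ in detail).
    """
    n, d = X.shape
    if y0 is None:
        y0 = X.mean(axis=0)                           # start from the centroid

    def objective(y):
        sq_dist = np.sum((X - y) ** 2, axis=1)        # ||x_i - y||^2
        return np.sum((sq_dist - delta2) ** 2)        # fourth-order polynomial in y

    def gradient(y):
        diff = y - X                                  # n x d
        sq_dist = np.sum(diff ** 2, axis=1)
        return 4.0 * (diff * (sq_dist - delta2)[:, None]).sum(axis=0)

    res = minimize(objective, y0, jac=gradient, method="BFGS")
    return res.x
```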


Computational Optimization and Applications | 2014

Parallel deterministic and stochastic global minimization of functions with very many minima

David R. Easterling; Layne T. Watson; Michael L. Madigan; Brent S. Castle; Michael W. Trosset

The optimization of three problems with high dimensionality and many local minima is investigated under five different optimization algorithms: DIRECT, simulated annealing, Spall's SPSA algorithm, the KNITRO package, and QNSTOP, a new algorithm developed at Indiana University.
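
Of the five methods compared, Spall's SPSA admits the most compact illustration. The sketch below is a textbook SPSA loop, not the implementation benchmarked in the paper, and the Rastrigin function at the end is a generic stand-in for the paper's three high-dimensional test problems.

```python
import numpy as np

def spsa(f, theta0, iterations=1000, a=0.1, c=0.1, alpha=0.602, gamma=0.101, seed=0):
    """Textbook simultaneous perturbation stochastic approximation (Spall).

    f      : (possibly noisy) objective to minimize
    theta0 : initial parameter vector
    The gain sequences follow the standard forms a / k**alpha and c / k**gamma.
    """
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float).copy()
    for k in range(1, iterations + 1):
        ak = a / k ** alpha
        ck = c / k ** gamma
        delta = rng.choice([-1.0, 1.0], size=theta.shape)     # Rademacher perturbation
        g_hat = (f(theta + ck * delta) - f(theta - ck * delta)) / (2.0 * ck) * (1.0 / delta)
        theta -= ak * g_hat                                   # stochastic gradient step
    return theta

if __name__ == "__main__":
    # Generic multimodal test function with many local minima (not one of the paper's problems)
    rastrigin = lambda x: 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))
    print(spsa(rastrigin, np.full(10, 3.0)))
```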


ACM Southeast Regional Conference | 2007

Multidimensional numerical integration for robust design optimization

Sean C. Kugele; Layne T. Watson; Michael W. Trosset

Engineers increasingly rely on computer simulation to develop new products and to understand emerging technologies. In practice, this process is permeated with uncertainty. Most of the computational tools developed for design optimization ignore or abuse the issue of uncertainty, whereas traditional methods for managing uncertainty are often prohibitively expensive. The ultimate goal of this work is the development of tractable computational tools that address these realities. As a small first step towards this goal, this paper explores the computational cost of multidimensional integration (computing expectation) for robust design optimization.
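
The core computation, the expectation of a simulation-based objective over uncertain inputs, can be sketched with plain Monte Carlo; the paper's interest is precisely in how expensive such estimates become in many dimensions, where deterministic cubature rules are also considered. The function names and the toy objective below are hypothetical.

```python
import numpy as np

def expected_objective(f, x, sample_uncertainty, n_samples=10_000, seed=0):
    """Estimate E_xi[ f(x, xi) ] by plain Monte Carlo.

    f                  : simulation-based objective f(design, uncertain_params)
    sample_uncertainty : draws one realization of the uncertain parameters
    A sketch; cost grows with the number of samples needed for a given accuracy.
    """
    rng = np.random.default_rng(seed)
    values = [f(x, sample_uncertainty(rng)) for _ in range(n_samples)]
    return float(np.mean(values))

if __name__ == "__main__":
    # Hypothetical example: a scalar design variable x with additive noise xi
    f = lambda x, xi: (x + xi - 1.0) ** 2
    draw = lambda rng: rng.normal(0.0, 0.1)
    print(expected_objective(f, x=0.9, sample_uncertainty=draw))
```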


SoutheastCon | 2007

Interplay of numerical integration with gradient based optimization algorithms for robust design optimization

Sean C. Kugele; Layne T. Watson; Michael W. Trosset

Engineers increasingly rely on computer simulation to develop new products and to understand emerging technologies. In practice, this process is permeated with uncertainty. Most of the computational tools developed for design optimization ignore or abuse the issue of uncertainty, whereas traditional methods for managing uncertainty are often prohibitively expensive. The ultimate goal of this work is the development of tractable computational tools that address these realities. As a small first step towards this goal, this paper explores the interaction between numerical integrators and gradient-based optimizers for robust design optimization.
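
One common way to make the inner integral and the outer gradient-based optimizer cooperate is to fix the integration sample (a sample-average approximation), so the estimated objective is a smooth, deterministic function of the design variables. The sketch below illustrates that nesting on a hypothetical one-dimensional design problem; it is not the experimental setup of the paper, whose subject is precisely how the integration error and the optimizer interact.

```python
import numpy as np
from scipy.optimize import minimize

def robust_objective(x, f, xi_samples):
    """Sample-average approximation of E_xi[ f(x, xi) ] over a fixed set of draws."""
    return float(np.mean([f(x, xi) for xi in xi_samples]))

# Hypothetical robust design problem: minimize E[(x + xi - 1)^2 + 0.1 * x^2]
f = lambda x, xi: (x[0] + xi - 1.0) ** 2 + 0.1 * x[0] ** 2
rng = np.random.default_rng(0)
xi_samples = rng.normal(0.0, 0.1, size=2000)          # fixed (common) random numbers

# Fixing the integration sample keeps the noise from corrupting the
# finite-difference gradients used by the outer BFGS iteration.
result = minimize(robust_objective, x0=[0.0], args=(f, xi_samples), method="BFGS")
print(result.x)
```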


Journal of Classification | 2010

Dimensionality Reduction on the Cartesian Product of Embeddings of Multiple Dissimilarity Matrices

Zhiliang Ma; Adam Cardinal-Stakenas; Youngser Park; Michael W. Trosset; Carey E. Priebe

We consider the problem of combining multiple dissimilarity representations via the Cartesian product of their embeddings. For concreteness, we choose the inferential task at hand to be classification. The high dimensionality of this Cartesian product space implies the necessity of dimensionality reduction before training a classifier. We propose a supervised dimensionality reduction method, which utilizes the class label information, to help achieve a favorable combination. The simulation and real data results show that our approach can improve classification accuracy compared to the alternatives of principal components analysis and no dimensionality reduction at all.
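
A minimal version of the pipeline, embed each dissimilarity matrix separately, concatenate the coordinates, and apply a supervised reduction before training a classifier, can be sketched as follows. Linear discriminant analysis and a k-nearest-neighbor classifier are generic stand-ins for the supervised reduction and the classifier; the method proposed in the paper differs in its details.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

def cmds(D, d):
    """Classical multidimensional scaling of a dissimilarity matrix D into R^d."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J                        # double-centred squared dissimilarities
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:d]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

def combine_and_reduce(dissimilarity_matrices, labels, embed_dim=10):
    """Embed each dissimilarity matrix, concatenate the embeddings (Cartesian product),
    then reduce with a supervised method (LDA as a stand-in) before classifying."""
    X = np.hstack([cmds(D, embed_dim) for D in dissimilarity_matrices])
    Z = LinearDiscriminantAnalysis().fit_transform(X, labels)
    clf = KNeighborsClassifier(n_neighbors=5).fit(Z, labels)
    return clf, Z
```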


Journal of Computational and Graphical Statistics | 2017

Fast Embedding for JOFC Using the Raw Stress Criterion

Vince Lyzinski; Youngser Park; Carey E. Priebe; Michael W. Trosset

The joint optimization of fidelity and commensurability (JOFC) manifold matching methodology embeds an omnibus dissimilarity matrix consisting of multiple dissimilarities on the same set of objects. One approach to this embedding optimizes the preservation of fidelity to each individual dissimilarity matrix together with commensurability of each given observation across modalities via iterative majorization of a raw stress error criterion by successive Guttman transforms. In this article, we exploit the special structure inherent to JOFC to exactly and efficiently compute the successive Guttman transforms, and as a result we are able to greatly speed up the JOFC procedure for both in-sample and out-of-sample embedding. We demonstrate the scalability of our implementation on both real and simulated data examples.
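
The Guttman transform at the heart of the majorization step has a short generic form, shown below for raw-stress MDS with arbitrary weights. This is the unspecialized update; the paper's contribution is to exploit the particular omnibus weight structure of JOFC so that the successive transforms can be computed exactly and much more cheaply.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def guttman_smacof(Delta, d=2, W=None, iterations=100, seed=0):
    """Raw-stress MDS by successive Guttman transforms (SMACOF majorization).

    Delta : target dissimilarity matrix (n x n)
    W     : optional symmetric weight matrix; JOFC uses a structured omnibus
            weight pattern, which is what the paper exploits for speed.
    A generic sketch, not the specialized fast implementation in the paper.
    """
    n = Delta.shape[0]
    if W is None:
        W = np.ones((n, n)) - np.eye(n)
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, d))                        # random initial configuration

    V = np.diag(W.sum(axis=1)) - W                     # weighted Laplacian
    V_pinv = np.linalg.pinv(V)

    for _ in range(iterations):
        D = squareform(pdist(X))
        with np.errstate(divide="ignore", invalid="ignore"):
            ratio = np.where(D > 0, Delta / D, 0.0)
        B = -W * ratio
        np.fill_diagonal(B, 0.0)
        np.fill_diagonal(B, -B.sum(axis=1))
        X = V_pinv @ B @ X                             # Guttman transform
    return X
```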


IEEE/ACM Transactions on Computational Biology and Bioinformatics | 2017

Quasi-Newton Stochastic Optimization Algorithm for Parameter Estimation of a Stochastic Model of the Budding Yeast Cell Cycle

Minghan Chen; Brandon D. Amos; Layne T. Watson; John J. Tyson; Yang Cao; Cliff Shaffer; Michael W. Trosset; Cihan Oguz; Gisella Kakoti

Parameter estimation in discrete or continuous deterministic cell cycle models is challenging for several reasons, including the nature of what can be observed, and the accuracy and quantity of those observations. The challenge is even greater for stochastic models, where the number of simulations and amount of empirical data must be even larger to obtain statistically valid parameter estimates. The two main contributions of this work are (1) stochastic model parameter estimation based on directly matching multivariate probability distributions, and (2) a new quasi-Newton algorithm class QNSTOP for stochastic optimization problems. QNSTOP directly uses the random objective function value samples rather than creating ensemble statistics. QNSTOP is used here to directly match empirical and simulated joint probability distributions rather than matching summary statistics. Results are given for a current state-of-the-art stochastic cell cycle model of budding yeast, whose predictions match well some summary statistics and one-dimensional distributions from empirical data, but do not match well the empirical joint distributions. The nature of the mismatch provides insight into the weakness in the stochastic model.
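
The distribution-matching idea in contribution (1) can be illustrated with a simple discrepancy between empirical and simulated joint histograms; the resulting noisy objective is what a stochastic optimizer such as QNSTOP must minimize. The simulate interface, bin choices, and the stand-in optimizer mentioned below are hypothetical, and QNSTOP itself is not sketched here.

```python
import numpy as np

def joint_histogram(samples, bins, ranges):
    """Normalized joint histogram of multivariate samples (rows = observations)."""
    hist, _ = np.histogramdd(samples, bins=bins, range=ranges)
    return hist / hist.sum()

def distribution_mismatch(params, simulate, empirical_hist, bins, ranges, n_runs=200):
    """Distance between empirical and simulated joint distributions for given params.

    simulate(params, n_runs) is assumed to return an (n_runs x k) array of
    stochastic model outputs; this interface is hypothetical, and the budding
    yeast cell cycle model in the paper is far more elaborate.
    """
    sim_hist = joint_histogram(simulate(params, n_runs), bins, ranges)
    return float(np.sum((sim_hist - empirical_hist) ** 2))    # squared L2 mismatch

# QNSTOP is not a standard library routine; in practice this objective would be
# handed to a stochastic optimizer, e.g. scipy.optimize.differential_evolution
# as a generic stand-in for the quasi-Newton stochastic search in the paper.
```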


Computational Statistics | 2008

Iterative Denoising

Kendall Giles; Michael W. Trosset; David J. Marchette; Carey E. Priebe

One problem in many fields is knowledge discovery in heterogeneous, high-dimensional data. As an example, in text mining an analyst often wishes to identify meaningful, implicit, and previously unknown information in an unstructured corpus. Lack of metadata and the complexities of document space make this task difficult. We describe Iterative Denoising, a methodology for knowledge discovery in large heterogeneous datasets that allows a user to visualize and to discover potentially meaningful relationships and structures. In addition, we demonstrate the features of this methodology in the analysis of a heterogeneous Science News corpus.
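
A generic recursive embed-and-partition loop in the spirit of the methodology can be sketched as follows; truncated SVD and k-means are stand-ins rather than the authors' choices, and the interactive visualization the abstract emphasizes is omitted.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

def iterative_partition(X, depth=3, n_clusters=2, n_components=10, seed=0):
    """Recursively embed and partition a data matrix (rows = documents).

    A sketch of a recursive embed / partition pattern; the published
    Iterative Denoising methodology is richer than this loop.
    Returns a list of index arrays, one per leaf of the partition tree.
    """
    def recurse(indices, level):
        if level == 0 or len(indices) <= n_clusters:
            return [indices]
        k = min(n_components, len(indices) - 1, X.shape[1] - 1)
        Z = TruncatedSVD(n_components=k, random_state=seed).fit_transform(X[indices])
        labels = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(Z)
        leaves = []
        for c in range(n_clusters):
            leaves.extend(recurse(indices[labels == c], level - 1))
        return leaves

    return recurse(np.arange(X.shape[0]), depth)
```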


Parallel Computing | 2013

Adjusting process count on demand for petascale global optimization

Masha Sosonkina; Layne T. Watson; Nicholas R. Radcliffe; Raphael T. Haftka; Michael W. Trosset

There are many challenges that need to be met before efficient and reliable computation at the petascale is possible. Many scientific and engineering codes running at the petascale are likely to be memory intensive, which makes thrashing a serious problem for many petascale applications. One way to overcome this challenge is to use a dynamic number of processes, so that the total amount of memory available for the computation can be increased on demand. This paper describes modifications made to the massively parallel global optimization code pVTdirect in order to allow for a dynamic number of processes. In particular, the modified version of the code monitors memory use and spawns new processes if the amount of available memory is determined to be insufficient. The primary design challenges are discussed, and performance results are presented and analyzed.
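
The monitor-and-spawn pattern can be sketched with standard tools: check available memory and, when it falls below a threshold, spawn additional MPI processes through the dynamic process management interface. The threshold, the worker script name, and the psutil-based check below are assumptions for illustration; the modified pVTdirect also integrates the new processes into the ongoing optimization, which is omitted here.

```python
import sys
import psutil                      # assumption: psutil is available for memory monitoring
from mpi4py import MPI             # assumption: mpi4py provides the MPI dynamic-process API

MIN_AVAILABLE_BYTES = 4 * 1024**3  # hypothetical threshold: 4 GiB free on this node

def maybe_spawn_workers(n_new=1, worker_script="worker.py"):
    """Spawn additional MPI processes when available memory runs low.

    A sketch of the monitor-and-spawn pattern; worker.py is a hypothetical
    worker entry point, and the workload redistribution is not shown.
    """
    available = psutil.virtual_memory().available
    if available < MIN_AVAILABLE_BYTES:
        # MPI_Comm_spawn returns an intercommunicator to the new processes.
        intercomm = MPI.COMM_SELF.Spawn(sys.executable, args=[worker_script],
                                        maxprocs=n_new)
        return intercomm
    return None
```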


Computational Statistics & Data Analysis | 2008

Semisupervised learning from dissimilarity data

Michael W. Trosset; Carey E. Priebe; Youngser Park; Michael I. Miller

Collaboration


Dive into Michael W. Trosset's collaborations.

Top Co-Authors

Brent S. Castle (Indiana University Bloomington)
Youngser Park (Johns Hopkins University)
David R. Easterling (Indiana University Bloomington)
Michael L. Madigan (Indiana University Bloomington)
Predrag Radivojac (Indiana University Bloomington)
Shantanu Jain (Indiana University Bloomington)