
Publication


Featured research published by Raghu Pasupathy.


Operations Research | 2010

On Choosing Parameters in Retrospective-Approximation Algorithms for Stochastic Root Finding and Simulation Optimization

Raghu Pasupathy

The stochastic root-finding problem is that of finding a zero of a vector-valued function known only through a stochastic simulation. The simulation-optimization problem is that of locating a real-valued function's minimum, again with only a stochastic simulation that generates function estimates. Retrospective approximation (RA) is a sample-path technique for solving such problems, where the solution to the underlying problem is approached via solutions to a sequence of approximate deterministic problems, each of which is generated using a specified sample size, and solved to a specified error tolerance. Our primary focus in this paper is providing guidance on choosing the sequence of sample sizes and error tolerances in RA algorithms. We first present an overview of the conditions that guarantee the correct convergence of RA's iterates. Then we characterize a class of error-tolerance and sample-size sequences that are superior to others in a certain precisely defined sense. We also identify and recommend members of this class and provide a numerical example illustrating the key results.
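The sketch below illustrates the retrospective-approximation loop in Python: each outer iteration builds a sample-path (deterministic) subproblem from a fresh sample and solves it, warm-started at the previous solution, with a larger sample size and a tighter error tolerance than before. The geometric growth and decay schedules, the use of scipy.optimize.minimize, and the toy objective are assumptions made for illustration, not the recommendations derived in the paper.

    import numpy as np
    from scipy.optimize import minimize

    def retrospective_approximation(sample_loss, x0, n_epochs=8, m0=50,
                                    sample_growth=1.5, tol0=1e-1, tol_decay=0.5):
        # Sketch of an RA loop: solve a sequence of sample-average subproblems
        # with growing sample sizes m_k and shrinking error tolerances eps_k.
        rng = np.random.default_rng(0)
        x, m, eps = np.asarray(x0, dtype=float), m0, tol0
        for _ in range(n_epochs):
            xi = rng.standard_normal(int(m))                          # sample for this subproblem
            f_k = lambda y: np.mean([sample_loss(y, z) for z in xi])  # sample-path objective
            x = minimize(f_k, x, tol=eps).x                           # warm-start at previous solution
            m *= sample_growth                                        # larger sample next epoch
            eps *= tol_decay                                          # tighter tolerance next epoch
        return x

    # Toy check: minimize E[(x - Z)^2] with Z ~ N(0, 1); the true minimizer is x = 0.
    x_ra = retrospective_approximation(lambda y, z: float((y[0] - z) ** 2), x0=[3.0])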


INFORMS Journal on Computing | 2013

Optimal Sampling Laws for Stochastically Constrained Simulation Optimization on Finite Sets

Susan R. Hunter; Raghu Pasupathy

Consider the context of selecting an optimal system from among a finite set of competing systems, based on a “stochastic” objective function and subject to multiple “stochastic” constraints. In this context, we characterize the asymptotically optimal sample allocation that maximizes the rate at which the probability of false selection tends to zero. Since the optimal allocation is the result of a concave maximization problem, its solution is particularly easy to obtain in contexts where the underlying distributions are known or can be assumed. We provide a consistent estimator for the optimal allocation and a corresponding sequential algorithm fit for implementation. Various numerical examples demonstrate how the proposed allocation differs from competing algorithms.


Archive | 2015

A Guide to Sample Average Approximation

Sujin Kim; Raghu Pasupathy; Shane G. Henderson

This chapter reviews the principles of sample average approximation (SAA) for solving simulation optimization problems. We provide an accessible overview of the area and survey interesting recent developments. We explain when one might want to use SAA and when one might expect it to provide good-quality solutions. We also review some of the key theoretical properties of the solutions obtained through SAA. We contrast SAA with stochastic approximation (SA) methods in terms of the computational effort required to obtain solutions of a given quality, explaining why SA “wins” asymptotically. However, an extension of SAA known as retrospective optimization can match the asymptotic convergence rate of SA, at least up to a multiplicative constant.
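As a rough illustration of the SAA recipe, the Python sketch below replaces the expectation in a problem of the form min_x E[F(x, Z)] with a fixed-sample average and hands the resulting deterministic problem to an off-the-shelf solver; the sample size, scenario distribution, solver, and toy objective are illustrative assumptions rather than the chapter's prescriptions.

    import numpy as np
    from scipy.optimize import minimize

    def saa_solve(F, x0, n_samples=1000, seed=1):
        # Sample average approximation: draw one batch of scenarios up front,
        # then minimize the resulting deterministic sample-average objective.
        rng = np.random.default_rng(seed)
        scenarios = rng.standard_normal(n_samples)   # assumed scenario distribution
        f_hat = lambda y: np.mean([F(y, z) for z in scenarios])
        return minimize(f_hat, x0).x

    # Toy check: minimize E[(x - Z)^2] with Z ~ N(0, 1); SAA should return a point near 0.
    x_saa = saa_solve(lambda y, z: float((y[0] - z) ** 2), x0=[2.0])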


Winter Simulation Conference | 2006

A testbed of simulation-optimization problems

Raghu Pasupathy; Shane G. Henderson

We propose a testbed of simulation-optimization problems. The purpose of the testbed is to encourage development and constructive comparison of simulation-optimization techniques and algorithms. We are particularly interested in increasing attention to the finite-time performance of algorithms, rather than the asymptotic results that one often finds in the literature.


ACM Transactions on Modeling and Computer Simulation | 2011

The stochastic root-finding problem: Overview, solutions, and open questions

Raghu Pasupathy; Sujin Kim

The stochastic root-finding problem (SRFP) is that of finding the zero(s) of a vector function, that is, solving a nonlinear system of equations when the function is expressed implicitly through a stochastic simulation. SRFPs are equivalently expressed as stochastic fixed-point problems, where the underlying function is expressed implicitly via a noisy simulation. After motivating SRFPs using a few examples, we review available methods to solve such problems on constrained Euclidean spaces. We present the current literature as three broad categories, and detail the basic theoretical results that are currently known in each of the categories. With a view towards helping the practitioner, we discuss specific variations in their implementable form, and provide references to computer code when easily available. Finally, we list a few questions that are worthwhile research pursuits from the standpoint of advancing our knowledge of the theoretical underpinnings and the implementation aspects of solutions to SRFPs.
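One classical approach in this area is stochastic approximation of the Robbins-Monro type, in which the noisy function is queried once per iteration and the iterate is moved against the observed value with a diminishing step size. A minimal Python sketch follows; the step-size schedule and the toy noisy function are assumptions chosen only to make the recursion concrete.

    import numpy as np

    def robbins_monro(g_noisy, x0, n_iters=5000, a=1.0, seed=7):
        # Robbins-Monro recursion for stochastic root finding:
        #   x_{k+1} = x_k - a_k * G(x_k),
        # where G(x_k) is a noisy observation of g(x_k) and a_k = a / (k + 1)
        # is an (assumed) diminishing step-size sequence.
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        for k in range(n_iters):
            x = x - (a / (k + 1)) * g_noisy(x, rng)
        return x

    # Toy check: g(x) = x - 2 observed with N(0, 1) noise; the root is x = 2.
    root = robbins_monro(lambda x, rng: (x - 2.0) + rng.standard_normal(x.shape), x0=[0.0])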


ACM Transactions on Modeling and Computer Simulation | 2015

Stochastically Constrained Ranking and Selection via SCORE

Raghu Pasupathy; Susan R. Hunter; Nugroho Artadi Pujowidianto; Loo Hay Lee; Chun-Hung Chen

Consider the context of constrained Simulation Optimization (SO); that is, optimization problems where the objective and constraint functions are known through dependent Monte Carlo estimators. For solving such problems on large finite spaces, we provide an easily implemented sampling framework called SCORE (Sampling Criteria for Optimization using Rate Estimators) that approximates the optimal simulation budget allocation. We develop a general theory, but, like much of the existing literature on ranking and selection, our focus is on SO problems where the distribution of the simulation observations is Gaussian. We first characterize the nature of the optimal simulation budget as a bi-level optimization problem. We then show that under a certain asymptotic limit, the solution to the bi-level optimization problem becomes surprisingly tractable and is expressed through a single intuitive measure, the score. We provide an iterative SO algorithm that repeatedly estimates the score and determines how the available simulation budget should be expended across contending systems. Numerical experience with the algorithm resulting from the proposed sampling approximation is very encouraging—in numerous examples of constrained SO problems having 1,000 to 10,000 systems, the optimal allocation is identified to negligible error within a few seconds to 1 minute on a typical laptop computer. Corresponding times to solve the full bi-level optimization problem range from tens of minutes to several hours.


Winter Simulation Conference | 2011

SimOpt: a library of simulation optimization problems

Raghu Pasupathy; Shane G. Henderson

We present SimOpt, a library of simulation-optimization problems intended to spur development and comparison of simulation-optimization methods and algorithms. The library currently has over 50 problems that are tagged by important problem attributes such as the type of decision variables and the nature of the constraints. Approximately half of the problems in the library come with a downloadable simulation oracle that follows a standardized calling protocol. We also propose the idea of problem and algorithm wrappers with a view toward facilitating assessment and comparison of simulation optimization algorithms.


Winter Simulation Conference | 2010

The initial transient in steady-state point estimation: contexts, a bibliography, the MSE criterion, and the MSER statistic

Raghu Pasupathy; Bruce W. Schmeiser

The initial transient is an unavoidable issue when estimating parameters of steady-state distributions. We discuss contexts and factors that affect how the initial transient is handled, provide a bibliography (from the system simulation literature), and discuss criteria for evaluating initial-transient algorithms, arguing for a focus on the mean squared error (MSE). We discuss the MSER statistic, showing that it is asymptotically proportional to the MSE and therefore a good foundation for initial-transient algorithms. We suggest two new algorithms (MSER-LLM and MSER-LLM2) for using the MSER statistic and compare them, based on empirical results for M/M/1 and AR(1) data processes, to the original MSER algorithm (MSER-GM).
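For reference, the classical MSER truncation rule that these algorithms build on evaluates, for each candidate truncation point d, the statistic MSER(d) = sum_{j>d} (x_j - mean(x[d:]))^2 / (n - d)^2 and truncates at the minimizing d. The Python sketch below implements that baseline rule; the MSER-LLM and MSER-LLM2 variants proposed in the paper are not reproduced here.

    import numpy as np

    def mser_truncation(x, max_frac=0.5):
        # Classical MSER rule: choose the truncation point d that minimizes
        # MSER(d) = sum_{j >= d} (x_j - mean(x[d:]))^2 / (n - d)^2,
        # searching d over the first max_frac of the series.
        x = np.asarray(x, dtype=float)
        n = len(x)
        best_d, best_val = 0, np.inf
        for d in range(int(max_frac * n)):
            tail = x[d:]
            val = np.sum((tail - tail.mean()) ** 2) / (n - d) ** 2
            if val < best_val:
                best_d, best_val = d, val
        return best_d

    # Example: a series with a biased warm-up of length 200; the chosen d should land near 200.
    rng = np.random.default_rng(3)
    series = np.concatenate([5.0 + rng.standard_normal(200), rng.standard_normal(2000)])
    d_star = mser_truncation(series)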


Journal of Quality Technology | 2010

Moment-Ratio Diagrams for Univariate Distributions

Erik Vargo; Raghu Pasupathy; Lawrence M. Leemis

We present two moment-ratio diagrams along with guidance for their interpretation. The first moment-ratio diagram is a graph of skewness vs. kurtosis for common univariate probability distributions. The second moment-ratio diagram is a graph of coefficient of variation vs. skewness for common univariate probability distributions. Both of these diagrams, to our knowledge, are the most comprehensive to date. The diagrams serve four purposes: (1) they quantify the proximity between various univariate distributions based on their second, third, and fourth moments, (2) they illustrate the versatility of a particular distribution based on the range of values that the various moments can assume, (3) they can be used to create a short list of potential probability models based on a data set, and (4) they clarify the limiting relationships between various well-known distribution families. The use of the moment-ratio diagrams for choosing a distribution that models given data is illustrated.
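The coordinates that go into the first diagram are straightforward to compute for standard families. The Python sketch below uses scipy.stats to print the (skewness, kurtosis) point for a few common distributions; the particular families and parameter values are arbitrary illustrative choices, and the paper's diagrams cover far more distributions.

    from scipy import stats

    # Skewness and (non-excess) kurtosis coordinates for a few common distributions,
    # as one would plot them on a skewness-vs-kurtosis moment-ratio diagram.
    distributions = {
        "normal": stats.norm(),
        "exponential": stats.expon(),
        "gamma(k=2)": stats.gamma(a=2.0),
        "lognormal(s=0.5)": stats.lognorm(s=0.5),
    }

    for name, dist in distributions.items():
        _, _, skew, excess_kurt = dist.stats(moments="mvsk")
        print(f"{name:18s} skewness = {float(skew):6.3f}   kurtosis = {float(excess_kurt) + 3:6.3f}")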


ACM Transactions on Modeling and Computer Simulation | 2013

Integer-Ordered Simulation Optimization using R-SPLINE: Retrospective Search with Piecewise-Linear Interpolation and Neighborhood Enumeration

Honggang Wang; Raghu Pasupathy; Bruce W. Schmeiser

We consider simulation-optimization (SO) models where the decision variables are integer ordered and the objective function is defined implicitly via a simulation oracle, which for any feasible solution can be called to compute a point estimate of the objective-function value. We develop R-SPLINE, a Retrospective-search algorithm that alternates between a continuous Search using Piecewise-Linear Interpolation and a discrete Neighborhood Enumeration, to asymptotically identify a local minimum. R-SPLINE appears to be among the first few gradient-based search algorithms tailored for solving integer-ordered local SO problems. In addition to proving the almost-sure convergence of R-SPLINE’s iterates to the set of local minima, we demonstrate that the probability of R-SPLINE returning a solution outside the set of true local minima decays exponentially in a certain precise sense. R-SPLINE, with no parameter tuning, compares favorably with popular existing algorithms.
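The sketch below conveys only part of R-SPLINE's structure: a retrospective outer loop with growing sample sizes, and, within each epoch, a greedy enumeration of integer neighbors under the current sample-average objective. The continuous piecewise-linear-interpolation search of the actual algorithm is omitted, and all parameter choices here are illustrative assumptions.

    import numpy as np

    def retrospective_neighborhood_search(sample_obj, x0, n_epochs=6, m0=50, growth=2.0, seed=11):
        # Simplified retrospective loop: fix a sample size m_k per epoch, then move to
        # the best integer neighbor of the current point under the sample-average
        # objective until no neighbor improves. (The real R-SPLINE interleaves a
        # continuous piecewise-linear-interpolation search before the enumeration.)
        rng = np.random.default_rng(seed)
        x, m = np.asarray(x0, dtype=int), m0
        for _ in range(n_epochs):
            xi = rng.standard_normal(int(m))                        # scenarios for this epoch
            f = lambda y: np.mean([sample_obj(y, z) for z in xi])   # sample-average objective
            improved = True
            while improved:                                         # discrete neighborhood enumeration
                steps = np.vstack([np.eye(len(x), dtype=int), -np.eye(len(x), dtype=int)])
                best = min((x + e for e in steps), key=f)
                improved = f(best) < f(x)
                if improved:
                    x = best
            m *= growth                                             # larger sample next epoch
        return x

    # Toy integer-ordered problem: minimize E[(x1 - 3 + Z)^2 + (x2 + 1 + Z)^2], Z ~ N(0, 1);
    # the minimizer is (3, -1).
    sol = retrospective_neighborhood_search(
        lambda y, z: float((y[0] - 3 + z) ** 2 + (y[1] + 1 + z) ** 2), x0=[0, 0])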

Collaboration


Dive into Raghu Pasupathy's collaborations.

Top Co-Authors


Kalyani Nagaraj

Virginia Bioinformatics Institute


Loo Hay Lee

National University of Singapore
