
Publications


Featured research published by Christos Alexopoulos.


IEEE Transactions on Reliability | 1995

A note on state-space decomposition methods for analyzing stochastic flow networks

Christos Alexopoulos

Consider a flow network with a single source s and a single sink t with demand d>0. Assume that the nodes do not restrict flow transmission and the arcs have finite random discrete capacities. This paper has two objectives: (1) it corrects errors in well-known algorithms by Doulliez and Jamoulle (1972) for computing (a) the probability that the demand is satisfied (the network reliability), (b) the probability that an arc belongs to a minimum cut that limits the flow below d, and (c) the probability that a cut limits the flow below d; and (2) it discusses the applicability of these procedures. The Doulliez and Jamoulle algorithms are frequently referenced or used by researchers in the areas of power and communication systems and appear to be very effective for computing the network reliability when the demand is close to the largest possible maximum flow value. Extensive testing is required before the Doulliez and Jamoulle algorithms are discarded in favor of alternative approaches. Such testing should compare the performance of existing methods on a variety of networks, including grid networks and dense networks of various sizes.
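
A minimal, hedged illustration of the underlying computation (not the Doulliez-Jamoulle decomposition itself): the Python sketch below estimates the probability that a network with random discrete arc capacities carries at least demand d from s to t by brute-force enumeration of capacity states. The function names and the small example network are illustrative; full enumeration is feasible only for tiny networks, which is precisely why decomposition methods such as Doulliez-Jamoulle matter.

```python
# Brute-force baseline for P(max s-t flow >= d) in a network with random
# discrete arc capacities. Illustrative only; not the Doulliez-Jamoulle method.
from itertools import product
from collections import deque

def max_flow(nodes, arcs, cap, s, t):
    """Edmonds-Karp max flow; arcs is a list of (u, v), cap[i] is the capacity of arcs[i]."""
    residual = {}
    for i, (u, v) in enumerate(arcs):
        residual[(u, v)] = residual.get((u, v), 0) + cap[i]
        residual.setdefault((v, u), 0)
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in nodes:
                if v not in parent and residual.get((u, v), 0) > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        # Find the bottleneck capacity along the path and augment.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        delta = min(residual[e] for e in path)
        for (u, v) in path:
            residual[(u, v)] -= delta
            residual[(v, u)] += delta
        flow += delta

def demand_reliability(nodes, arcs, capacity_pmfs, s, t, d):
    """P(max s-t flow >= d); capacity_pmfs[i] is a dict {capacity: probability} for arc i."""
    rel = 0.0
    for state in product(*[pmf.items() for pmf in capacity_pmfs]):
        caps = [c for c, _ in state]
        prob = 1.0
        for _, p in state:
            prob *= p
        if max_flow(nodes, arcs, caps, s, t) >= d:
            rel += prob
    return rel

# Example: two parallel paths s->a->t and s->b->t with random arc capacities.
nodes = ["s", "a", "b", "t"]
arcs = [("s", "a"), ("a", "t"), ("s", "b"), ("b", "t")]
pmfs = [{0: 0.1, 2: 0.9}] * 4   # each arc is up (capacity 2) with probability 0.9
print(demand_reliability(nodes, arcs, pmfs, "s", "t", d=2))
```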


IEEE Transactions on Systems, Man, and Cybernetics | 1992

Path planning for a mobile robot

Christos Alexopoulos; Paul M. Griffin

Two problems for path planning of a mobile robot are considered. The first problem is to find a shortest-time, collision-free path for the robot in the presence of stationary obstacles in two dimensions. The second problem is to determine a collision-free path (greedy in time) for a mobile robot in an environment of moving obstacles. The environment is modeled in space-time and the collision-free path is found by a variation of the A* algorithm.
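
As a rough companion to the first problem, here is a standard A* search on a 2D grid with stationary obstacles; this is a generic sketch, not the authors' formulation. The paper's second problem can be viewed as the same search over a space-time graph where the state is (x, y, t) and cells occupied by a moving obstacle at time t are treated as blocked. The grid and heuristic are illustrative.

```python
# Generic A* on a 2D occupancy grid; a sketch of the search machinery only.
import heapq, itertools

def astar(grid, start, goal):
    """grid[r][c] == 1 marks an obstacle; returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    counter = itertools.count()                               # heap tie-breaker
    open_heap = [(h(start), 0, next(counter), start, None)]
    came_from, g_best = {}, {start: 0}
    while open_heap:
        _, g, _, cur, parent = heapq.heappop(open_heap)
        if cur in came_from:                                  # already expanded
            continue
        came_from[cur] = parent
        if cur == goal:                                       # reconstruct the path
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_best.get((nr, nc), float("inf")):
                    g_best[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, next(counter), (nr, nc), cur))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```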


ACM Transactions on Modeling and Computer Simulation | 2005

ASAP3: a batch means procedure for steady-state simulation analysis

Natalie M. Steiger; Emily K. Lada; James R. Wilson; Jeffrey A. Joines; Christos Alexopoulos; David Goldsman

We introduce ASAP3, a refinement of the batch means algorithms ASAP and ASAP2 that delivers point and confidence-interval estimators for the expected response of a steady-state simulation. ASAP3 is a sequential procedure designed to produce a confidence-interval estimator that satisfies user-specified requirements on absolute or relative precision as well as coverage probability. ASAP3 operates as follows: the batch size is progressively increased until the batch means pass the Shapiro-Wilk test for multivariate normality; and then ASAP3 fits a first-order autoregressive (AR(1)) time series model to the batch means. If necessary, the batch size is further increased until the autoregressive parameter in the AR(1) model does not significantly exceed 0.8. Next, ASAP3 computes the terms of an inverse Cornish-Fisher expansion for the classical batch means t-ratio based on the AR(1) parameter estimates; and finally ASAP3 delivers a correlation-adjusted confidence interval based on this expansion. Regarding not only conformance to the precision and coverage-probability requirements but also the mean and variance of the half-length of the delivered confidence interval, ASAP3 compared favorably to other batch means procedures (namely, ABATCH, ASAP, ASAP2, and LBATCH) in an extensive experimental performance evaluation.
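
The sketch below shows only the classical batch means confidence interval and the lag-1 correlation check that motivates ASAP3's AR(1) fit; the sequential batch-size inflation, the multivariate normality test, and the Cornish-Fisher adjustment are omitted. It assumes numpy and scipy are available; the batch count and the AR(1) test process are illustrative, not from the paper.

```python
# Minimal batch means confidence interval; a sketch of the idea behind ASAP3,
# not the ASAP3 procedure itself.
import numpy as np
from scipy import stats

def batch_means_ci(data, n_batches=32, alpha=0.05):
    """Classical nonoverlapping batch means confidence interval for the mean."""
    data = np.asarray(data, dtype=float)
    b = len(data) // n_batches                                    # batch size
    means = data[: b * n_batches].reshape(n_batches, b).mean(axis=1)
    # Lag-1 autocorrelation of the batch means; ASAP3 keeps increasing the batch
    # size until the fitted AR(1) parameter is acceptably small.
    phi = np.corrcoef(means[:-1], means[1:])[0, 1]
    center = means.mean()
    half = stats.t.ppf(1 - alpha / 2, n_batches - 1) * means.std(ddof=1) / np.sqrt(n_batches)
    return center, half, phi

# Example: a stationary AR(1) process as a stand-in for simulation output.
rng = np.random.default_rng(1)
n = 100_000
noise = rng.normal(size=n)
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = 0.9 * x[i - 1] + noise[i]
print(batch_means_ci(x))
```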


ACM Transactions on Modeling and Computer Simulation | 2004

To batch or not to batch

Christos Alexopoulos; David Goldsman

When designing steady-state computer simulation experiments, one may be faced with the choice of batching observations in one long run or replicating a number of smaller runs. Both methods are potentially useful in the course of undertaking simulation output analysis. The tradeoffs between the two alternatives are well known: batching ameliorates the effects of initialization bias, but produces batch means that might be correlated; replication yields independent sample means, but may suffer from initialization bias at the beginning of each of the runs. We present several new results and specific examples to lend insight as to when one method might be preferred over the other. In steady state, batching and replication perform similarly in terms of estimating the mean and variance parameter, but replication tends to do better than batching with regard to the performance of confidence intervals for the mean. Such a victory for replication may be hollow, for in the presence of an initial transient, batching often performs better than replication when it comes to point and confidence-interval estimation of the steady-state mean. We conclude, like other classic references, that in the context of estimating the steady-state mean, batching is typically the wiser approach.
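
The following illustrative experiment (an assumed AR(1) test process started away from its steady-state mean of zero, not an example from the paper) shows the tradeoff in miniature: batching pays the initialization penalty once, whereas each replication restarts in the transient.

```python
# Batching vs. replication on an AR(1) process with steady-state mean 0,
# deliberately started at x0 = 10 to inject initialization bias. Illustrative only.
import numpy as np

def ar1_run(n, phi=0.9, x0=10.0, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    noise = rng.normal(size=n)
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = phi * x[i - 1] + noise[i]
    return x

rng = np.random.default_rng(2024)
k, n = 40, 500                        # 40 batches/replications of 500 observations

# (a) Batching: one run of length k*n, cut into k contiguous batches.
long_run = ar1_run(k * n, rng=rng)
batch_means = long_run.reshape(k, n).mean(axis=1)

# (b) Replication: k independent runs of length n; each restarts in the transient.
rep_means = np.array([ar1_run(n, rng=rng).mean() for _ in range(k)])

print("batching estimate:   ", batch_means.mean())
print("replication estimate:", rep_means.mean())   # typically noticeably above 0
```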


IIE Transactions | 2007

A distribution-free tabular CUSUM chart for autocorrelated data

Seong-Hee Kim; Christos Alexopoulos; Kwok-Leung Tsui; James R. Wilson

A distribution-free tabular CUSUM chart called DFTC is designed to detect shifts in the mean of an autocorrelated process. The chart's average run length (ARL) is approximated by generalizing Siegmund's ARL approximation for the conventional tabular CUSUM chart based on independent and identically distributed normal observations. Control limits for DFTC are computed from the generalized ARL approximation. Also discussed are the choice of the reference value and the use of batch means to handle highly correlated processes. The performance of DFTC compared favorably with that of other distribution-free procedures in stationary test processes having various types of autocorrelation functions as well as normal or nonnormal marginals.
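
For orientation, here is the conventional two-sided tabular CUSUM that DFTC generalizes; it is not DFTC itself, since DFTC's reference value and control limit come from the generalized ARL approximation for autocorrelated data, which is omitted here. The reference value k and control limit h below are illustrative placeholders.

```python
# Conventional two-sided tabular CUSUM chart; a sketch of the base chart only.
import numpy as np

def tabular_cusum(x, target, k, h):
    """Return the index of the first out-of-control signal, or None."""
    s_plus = s_minus = 0.0
    for i, xi in enumerate(np.asarray(x, dtype=float)):
        s_plus = max(0.0, s_plus + (xi - target) - k)    # accumulates upward shifts
        s_minus = max(0.0, s_minus - (xi - target) - k)  # accumulates downward shifts
        if s_plus > h or s_minus > h:
            return i
    return None

rng = np.random.default_rng(7)
in_control = rng.normal(0.0, 1.0, 500)
shifted = rng.normal(1.0, 1.0, 500)           # mean shift of one standard deviation
data = np.concatenate([in_control, shifted])
print(tabular_cusum(data, target=0.0, k=0.5, h=5.0))
```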


Operations Research | 2007

Overlapping Variance Estimators for Simulation

Christos Alexopoulos; Nilay Tanik Argon; David Goldsman; Gamze Tokol; James R. Wilson

To estimate the variance parameter (i.e., the sum of covariances at all lags) for a steady-state simulation output process, we formulate certain statistics that are computed from overlapping batches separately and then averaged over all such batches. We form overlapping versions of the area and Cramer--von Mises estimators using the method of standardized time series. For these estimators, we establish (i) their limiting distributions as the sample size increases while the ratio of the sample size to the batch size remains fixed; and (ii) their mean-square convergence to the variance parameter as both the batch size and the ratio of the sample size to the batch size increase. Compared with their counterparts computed from nonoverlapping batches, the estimators computed from overlapping batches asymptotically achieve reduced variance while maintaining the same bias as the sample size increases; moreover, the new variance estimators usually achieve similar improvements compared with the conventional variance estimators based on nonoverlapping or overlapping batch means. In follow-up work, we present several analytical and Monte Carlo examples, and we formulate efficient procedures for computing the overlapping estimators with only order-of-sample-size effort.
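
A naive, hedged sketch of the overlapping weighted-area estimator with the constant weight function f(t) = sqrt(12) from the standardized-time-series literature. This direct form costs O(nm) work per data set; the follow-up paper listed next describes order-of-sample-size algorithms. The AR(1) test process and batch size are illustrative.

```python
# Overlapping constant-weighted area estimator of the variance parameter
# (sum of covariances at all lags); naive O(n*m) computation, for illustration.
import numpy as np

def area_estimator_batch(y):
    """Weighted-area variance-parameter estimator from one batch y[0..m-1]."""
    m = len(y)
    grand = y.mean()
    prefix_means = np.cumsum(y) / np.arange(1, m + 1)      # Ybar_1, ..., Ybar_m
    k = np.arange(1, m + 1)
    weighted = np.sqrt(12.0) * k * (grand - prefix_means) / np.sqrt(m)
    return weighted.mean() ** 2

def overlapping_area_estimator(data, m):
    """Average the per-batch estimator over all n - m + 1 overlapping batches."""
    data = np.asarray(data, dtype=float)
    n = len(data)
    vals = [area_estimator_batch(data[i:i + m]) for i in range(n - m + 1)]
    return float(np.mean(vals))

# Example: AR(1) with phi = 0.5 and unit innovation variance has variance
# parameter 1 / (1 - 0.5)^2 = 4.
rng = np.random.default_rng(3)
n = 20_000
noise = rng.normal(size=n)
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = 0.5 * x[i - 1] + noise[i]
print(overlapping_area_estimator(x, m=400))
```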


INFORMS Journal on Computing | 2007

Efficient Computation of Overlapping Variance Estimators for Simulation

Christos Alexopoulos; Nilay Tanik Argon; David Goldsman; Natalie M. Steiger; Gamze Tokol; James R. Wilson

For a steady-state simulation output process, we formulate efficient algorithms to compute certain estimators of the process variance parameter (i.e., the sum of covariances at all lags), where the estimators are derived in principle from overlapping batches separately and then averaged over all such batches. The algorithms require order-of-sample-size work to evaluate overlapping versions of the area and Cramer--von Mises estimators arising in the method of standardized time series. Recently, Alexopoulos et al. showed that, compared with estimators based on nonoverlapping batches, the estimators based on overlapping batches achieve reduced variance while maintaining similar bias asymptotically as the batch size increases. We provide illustrative analytical and Monte Carlo results for M/M/1 queue waiting times and for a first-order autoregressive process. We also present evidence that the asymptotic distribution of each overlapping variance estimator can be closely approximated using an appropriately rescaled chi-squared random variable with matching mean and variance.
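
The paper's algorithms target the area and Cramer-von Mises estimators; the sketch below only illustrates the same order-of-sample-size idea on the simpler overlapping-batch-means estimator, where a single prefix-sum pass yields every overlapping batch mean in O(n) time. The scaling constant follows the standard Meketon-Schmeiser form; the test process and batch size are illustrative.

```python
# Overlapping-batch-means (OBM) variance-parameter estimator computed in O(n)
# via prefix sums; illustrates the order-of-sample-size idea, not the paper's
# algorithms for the area and Cramer-von Mises estimators.
import numpy as np

def obm_variance_estimator(data, m):
    """Meketon-Schmeiser OBM estimator of the variance parameter."""
    data = np.asarray(data, dtype=float)
    n = len(data)
    prefix = np.concatenate(([0.0], np.cumsum(data)))
    # Means of all n - m + 1 overlapping batches of size m, in one vectorized step.
    batch_means = (prefix[m:] - prefix[:-m]) / m
    grand = data.mean()
    return n * m * np.sum((batch_means - grand) ** 2) / ((n - m + 1) * (n - m))

rng = np.random.default_rng(4)
n = 100_000
noise = rng.normal(size=n)
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = 0.5 * x[i - 1] + noise[i]       # variance parameter is 4
print(obm_variance_estimator(x, m=1_000))
```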


IEEE Transactions on Systems, Man, and Cybernetics | 1989

Point pattern matching using centroid bounding

Paul M. Griffin; Christos Alexopoulos

A method is presented for matching point patterns A and B in the plane, where |A| = |B| = N. The method is capable of determining the registration and matching and is not affected by translation, rotation, scaling, or noise. An example application of the algorithm to the matching of flat parts is shown. The algorithm not only allows part recognition, but may also be extended to perform automated verification of the dimensions of the part.
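
The following is a generic centroid-alignment sketch, not the authors' centroid-bounding construction: each pattern is translated to its centroid and scaled to unit mean radius, and candidate rotations are then checked by nearest-neighbor coverage. The tolerance eps and the example patterns are illustrative.

```python
# Generic centroid-based point-pattern matching sketch (translation, rotation,
# and scale invariant); not the centroid-bounding method of the paper.
import numpy as np

def normalize(P):
    """Remove translation and scale: center at the centroid, unit mean radius."""
    P = np.asarray(P, dtype=float)
    Q = P - P.mean(axis=0)
    return Q / np.linalg.norm(Q, axis=1).mean()

def match(A, B, eps=1e-3):
    """Return True if B is a translated/rotated/scaled copy of A (|A| = |B| = N)."""
    A, B = normalize(A), normalize(B)
    angles_a = np.arctan2(A[:, 1], A[:, 0])
    angles_b = np.arctan2(B[:, 1], B[:, 0])
    # Try rotating B so that each of its points aligns with A[0] in turn.
    for theta in angles_a[0] - angles_b:
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        Brot = B @ R.T
        # Every rotated point of B must have a close counterpart in A.
        dists = np.linalg.norm(A[:, None, :] - Brot[None, :, :], axis=2)
        if dists.min(axis=0).max() < eps:
            return True
    return False

A = np.array([[0, 0], [1, 0], [1, 1], [0, 2]], dtype=float)
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
B = 2.5 * A @ R.T + np.array([3.0, -1.0])      # scaled, rotated, translated copy of A
print(match(A, B))
```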


Winter Simulation Conference | 2006

A comprehensive review of methods for simulation output analysis

Christos Alexopoulos

This paper reviews statistical methods for analyzing output data from computer simulations. Specifically, it focuses on the estimation of steady-state system parameters. The estimation techniques include the replication/deletion approach, the regenerative method, the batch means method, and methods based on standardized time series.


Winter Simulation Conference | 2002

ASAP2: an improved batch means procedure for simulation output analysis

Natalie M. Steiger; Emily K. Lada; James R. Wilson; Christos Alexopoulos; David Goldsman; Faker Zouaoui

We introduce ASAP2, an improved variant of the batch-means algorithm ASAP for steady-state simulation output analysis. ASAP2 operates as follows: the batch size is progressively increased until the batch means pass the Shapiro-Wilk test for multivariate normality; and then ASAP2 delivers a correlation-adjusted confidence interval. The latter adjustment is based on an inverted Cornish-Fisher expansion for the classical batch means t-ratio, where the terms of the expansion are estimated via a first-order autoregressive time series model of the batch means. ASAP2 is a sequential procedure designed to deliver a confidence interval that satisfies a prespecified absolute or relative precision requirement. When used in this way, ASAP2 compares favorably to ASAP and the well-known procedures ABATCH and LBATCH with respect to close conformance to the precision requirement as well as coverage probability and mean and variance of the half-length of the final confidence interval.

Collaboration


Dive into Christos Alexopoulos's collaborations.

Top Co-Authors

David Goldsman (Georgia Institute of Technology)
James R. Wilson (North Carolina State University)
Nilay Tanik Argon (University of North Carolina at Chapel Hill)
George S. Fishman (University of North Carolina at Chapel Hill)
Richard M. Fujimoto (Georgia Institute of Technology)
Seong-Hee Kim (Georgia Institute of Technology)
David Kopald (University of California)
Michael Hunter (Georgia Institute of Technology)