Victoria C. P. Chen
University of Texas at Arlington
Publications
Featured research published by Victoria C. P. Chen.
IIE Transactions | 2006
Victoria C. P. Chen; Kwok-Leung Tsui; Russell R. Barton; Martin Meckesheimer
In this paper, we provide a review of statistical methods that are useful in conducting computer experiments. Our focus is on the task of metamodeling, which is driven by the goal of optimizing a complex system via a deterministic simulation model. However, we also mention the case of a stochastic simulation, and examples of both cases are discussed. Our review first presents several engineering applications and then describes approaches for the two primary tasks of metamodeling: (i) selecting an experimental design; and (ii) fitting a statistical model. Seven statistical modeling methods are included, and both classical and newer experimental designs are discussed. Finally, our own computational study tests the various metamodeling options on two two-dimensional response surfaces and one ten-dimensional surface.
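As a rough illustration of the two metamodeling tasks described in this abstract, the sketch below pairs a space-filling design with a kriging-style fit. The "simulation" is a hypothetical 2-D test function, and the specific choices (scipy's Latin hypercube sampler, scikit-learn's Gaussian process) are this illustration's assumptions, not anything prescribed by the paper.

```python
# Minimal metamodeling sketch: space-filling design + statistical model fit.
# The "simulation" here is a stand-in 2-D test function, not the systems
# studied in the paper.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulate(x):
    # Hypothetical deterministic response surface used for illustration.
    return np.sin(3 * x[:, 0]) + x[:, 1] ** 2

# (i) Select an experimental design: a Latin hypercube over [0, 1]^2.
design = qmc.LatinHypercube(d=2, seed=0).random(n=40)
y = simulate(design)

# (ii) Fit a statistical metamodel: here a Gaussian process (kriging).
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True)
gp.fit(design, y)

# The cheap metamodel can now stand in for the simulation during optimization.
x_new = np.array([[0.25, 0.75]])
print(gp.predict(x_new))
```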
International Journal of Production Research | 2001
B. M. Beamon; Victoria C. P. Chen
This research is concerned with the performance behaviour of conjoined supply chains, which typically arise in web-based retail. In particular, five performance measures belonging to three performance measure classes were used to study the effects of various operational factors on conjoined supply chains. The study is accomplished via experimental design and simulation analysis; the results characterize the effects of the individual factors on supply chain performance and identify the nature of the relationships between these factors and overall supply chain performance.
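The general design-and-simulate workflow can be sketched with a hypothetical 2-level factorial over three made-up operational factors and a toy performance simulator; neither the factors nor the simulator below are those used in the study.

```python
# Sketch of the experimental-design-plus-simulation approach: a 2-level
# factorial over hypothetical operational factors and a toy performance
# simulator, used only to illustrate how factor effects can be estimated.
import itertools
import numpy as np

def simulate_chain(inventory_policy, demand_rate, lead_time, rng):
    # Hypothetical stand-in for a supply chain simulation.
    noise = rng.normal(scale=0.05)
    return 1.0 - 0.2 * lead_time + 0.3 * inventory_policy * demand_rate + noise

rng = np.random.default_rng(0)
levels = [-1, 1]
design = list(itertools.product(levels, repeat=3))   # 2^3 factorial
responses = [simulate_chain(a, b, c, rng) for a, b, c in design]

# Main-effect estimates: average response difference between factor levels.
X = np.array(design, dtype=float)
y = np.array(responses)
effects = X.T @ y / (len(y) / 2)
print(dict(zip(["inventory_policy", "demand_rate", "lead_time"], effects)))
```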
European Journal of Operational Research | 2006
Cristiano Cervellera; Victoria C. P. Chen; Aihong Wen
A numerical solution to a 30-dimensional water reservoir network optimization problem, based on stochastic dynamic programming, is presented. In such problems the amount of water to be released from each reservoir is chosen to minimize a nonlinear cost (or maximize benefit) function while satisfying proper constraints. Experimental results show how dimensionality issues, given by the large number of basins and realistic modeling of the stochastic inflows, can be mitigated by employing neural approximators for the value functions, and efficient discretizations of the state space, such as orthogonal arrays, Latin hypercube designs and low-discrepancy sequences.
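A minimal sketch of the value-function-approximation idea, scaled down to a single hypothetical reservoir rather than the 30-dimensional network: a small neural network (scikit-learn's MLPRegressor) is fitted to sampled Bellman values at each stage over a Latin hypercube discretization of the state space. All dynamics, costs, and parameters below are illustrative assumptions.

```python
# Toy sketch of value-function approximation in stochastic dynamic programming:
# a single hypothetical reservoir (not the paper's 30-dimensional network),
# with a neural network fitted to sampled Bellman values at each stage.
import numpy as np
from scipy.stats import qmc
from sklearn.neural_network import MLPRegressor

T, n_states, n_inflows = 3, 60, 20
rng = np.random.default_rng(1)
inflows = rng.uniform(0.0, 0.3, size=n_inflows)        # stochastic inflow scenarios
releases = np.linspace(0.0, 0.5, 11)                   # candidate release decisions

def stage_cost(release):
    return (release - 0.3) ** 2                         # hypothetical cost of deviating from a target

value_fn = None                                         # terminal value taken as zero
for t in reversed(range(T)):
    states = qmc.LatinHypercube(d=1, seed=t).random(n_states).ravel()  # storage levels in [0, 1]
    targets = []
    for s in states:
        best = np.inf
        for r in releases[releases <= s]:               # cannot release more than is stored
            nxt = np.clip(s - r + inflows, 0.0, 1.0)    # next-stage storage per inflow scenario
            future = 0.0 if value_fn is None else value_fn.predict(nxt[:, None]).mean()
            best = min(best, stage_cost(r) + future)
        targets.append(best)
    value_fn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
    value_fn.fit(states[:, None], np.array(targets))    # neural approximator of the value function
```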
Handbook of Statistics | 2003
Victoria C. P. Chen; Kwok-Leung Tsui; Russell R. Barton; Janet K. Allen
In this chapter, we provide a review of statistical methods that are useful in conducting computer experiments. Our focus is primarily on the task of metamodeling, which is driven by the goal of optimizing a complex system via a deterministic simulation model. However, we also mention the case of a stochastic simulation, and examples of both cases are discussed. The organization of our review separates the two primary tasks for metamodeling: (1) selecting an experimental design; and (2) fitting a statistical model. We provide an overview of the general strategy and discuss applications in electrical engineering, chemical engineering, mechanical engineering, and dynamic programming. Then we dedicate a section to statistical modeling methods, followed by a section on experimental designs. Designs are discussed in two paradigms, model-dependent and model-independent, to emphasize their different objectives. Both classical and modern methods are discussed.
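The two design paradigms contrasted in the chapter can be illustrated briefly as follows; the 3-factor setting and library calls are placeholder assumptions, not taken from the chapter itself.

```python
# Brief illustration of the two design paradigms: a model-dependent design
# (a 2-level factorial aimed at a first-order polynomial model) versus a
# model-independent space-filling design (a Latin hypercube), both for a
# hypothetical 3-factor study.
import itertools
import numpy as np
from scipy.stats import qmc

# Model-dependent: 2^3 factorial, efficient for estimating the effects of an
# assumed low-order polynomial model.
factorial = np.array(list(itertools.product([-1.0, 1.0], repeat=3)))

# Model-independent: Latin hypercube, spreads points to support flexible
# metamodels (splines, kriging, neural networks) with no assumed form.
lhd = qmc.LatinHypercube(d=3, seed=0).random(n=8)

print(factorial.shape, lhd.shape)  # both use 8 runs in 3 factors
```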
Computers & Operations Research | 2007
Cristiano Cervellera; Aihong Wen; Victoria C. P. Chen
Dynamic programming is a multi-stage optimization method that is applicable to many problems in engineering. A statistical perspective on value function approximation in high-dimensional, continuous-state stochastic dynamic programming (SDP) was first presented using orthogonal array (OA) experimental designs and multivariate adaptive regression splines (MARS). Given the popularity of artificial neural networks (ANNs) for high-dimensional modeling in engineering, this paper presents an implementation of ANNs as an alternative to MARS. Comparisons consider the differences in methodological objectives, computational complexity, model accuracy, and numerical SDP solutions. Two applications are presented: a nine-dimensional inventory forecasting problem and an eight-dimensional water reservoir problem. Both OAs and OA-based Latin hypercube experimental designs are explored, and OA space-filling quality is considered.
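A hedged sketch of such a comparison on a common design: since scikit-learn ships no MARS implementation, an additive spline regression stands in for MARS below, and the response surface being approximated is a made-up surrogate rather than the paper's inventory or reservoir value functions.

```python
# Sketch comparing two value-function approximators on the same design,
# in the spirit of the MARS-versus-ANN comparison. An additive spline
# regression is used as a stand-in for MARS.
import numpy as np
from scipy.stats import qmc
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = qmc.LatinHypercube(d=4, seed=0).random(n=200)        # space-filling training design
y = np.sin(2 * np.pi * X[:, 0]) + X[:, 1] * X[:, 2] + rng.normal(0, 0.05, 200)

spline_model = make_pipeline(SplineTransformer(degree=3, n_knots=6), LinearRegression())
ann_model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)

X_test = qmc.LatinHypercube(d=4, seed=1).random(n=100)   # held-out test points
y_test = np.sin(2 * np.pi * X_test[:, 0]) + X_test[:, 1] * X_test[:, 2]

for name, model in [("spline (MARS stand-in)", spline_model), ("ANN", ann_model)]:
    model.fit(X, y)
    rmse = np.sqrt(np.mean((model.predict(X_test) - y_test) ** 2))
    print(name, round(rmse, 4))
```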
Communications in Statistics - Simulation and Computation | 2011
Poovich Phaladiganon; Seoung Bum Kim; Victoria C. P. Chen; Jun Geol Baek; Sun-Kyoung Park
Control charts have been used effectively for years to monitor processes and detect abnormal behaviors. However, most control charts require a specific distribution to establish their control limits. The bootstrap method is a nonparametric technique that does not rely on the assumption of a parametric distribution of the observed data. Although the bootstrap technique has been used to develop univariate control charts to monitor a single process, no effort has been made to integrate the effectiveness of the bootstrap technique with multivariate control charts. In the present study, we propose a bootstrap-based multivariate T² control chart that can efficiently monitor a process when the distribution of observed data is nonnormal or unknown. A simulation study was conducted to evaluate the performance of the proposed control chart and compare it with a traditional Hotelling's T² control chart and the kernel density estimation (KDE)-based T² control chart. The results showed that the proposed chart performed better than the traditional T² control chart and comparably with the KDE-based T² control chart. Furthermore, we present a case study to demonstrate the applicability of the proposed control chart to real situations.
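A minimal sketch of the bootstrap idea for setting the T² control limit, using synthetic lognormal (nonnormal) in-control data; the resampling scheme and percentile shown are one plausible variant, not necessarily the exact construction in the paper.

```python
# Bootstrap control limit for a multivariate T^2 chart: resample the
# in-control T^2 values and estimate an upper percentile as the limit.
# Data are synthetic and only illustrative.
import numpy as np

rng = np.random.default_rng(0)
phase1 = rng.lognormal(mean=0.0, sigma=0.5, size=(200, 3))   # nonnormal in-control data

mu = phase1.mean(axis=0)
S_inv = np.linalg.inv(np.cov(phase1, rowvar=False))

def t2(x):
    d = x - mu
    return np.einsum("ij,jk,ik->i", d, S_inv, d)             # T^2 statistic per observation

# Bootstrap the upper control limit from the in-control T^2 values.
t2_vals = t2(phase1)
n, B = len(t2_vals), 2000
boot_limits = [np.quantile(t2_vals[rng.integers(0, n, n)], 0.9973) for _ in range(B)]
ucl = float(np.mean(boot_limits))

# Monitor new observations against the bootstrap limit.
new_obs = rng.lognormal(mean=0.3, sigma=0.5, size=(5, 3))
print(t2(new_obs) > ucl)
```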
Journal of the Royal Statistical Society, Series C (Applied Statistics) | 2003
Victoria C. P. Chen; Dirk Günther; Ellis L. Johnson
The yield management (YM) problem considers the task of maximizing a company's revenue. For the competitive airline industry, profit margins depend on a good YM policy. Research on airline YM is abundant but still limited to heuristics and small cases. We address the YM problem for a major domestic airline carrier's hub-and-spoke network, involving 20 cities and 31 flight legs. This is a problem of realistic size, since airline networks are usually separated by hub cities. Our method is a variant of the orthogonal array experimental design and multivariate adaptive regression splines stochastic dynamic programming method, and it is demonstrated to outperform state-of-the-art YM methods.
SIAM Journal on Optimization | 2002
Victoria C. P. Chen
This paper describes a continuous space discretization scheme based on statistical experimental designs generated from orthogonal arrays (OAs) of strength three with index unity. Chen, Ruppert, and Shoemaker [Oper. Res., 47 (1999), pp. 38--53] employed this efficient discretization scheme in a numerical solution method for high-dimensional continuous-state stochastic dynamic programming (SDP). These OAs may be instrumental in reducing the dimensionality of event spaces, SDP state spaces, and first-stage decision spaces in two-stage stochastic programming. In particular, computationally efficient space-filling measures for these OAs are derived for evaluating how well a specific OA discretization fills the state space. Comparisons were made with two types of common measures: ones which maximize the average (or minimum) distance between discretization points within the OA and ones which minimize the average (or maximum) distance between discretization points and nondiscretization points lying on a full grid (i.e., points lying on the full grid that are not contained in the OA discretization). OAs of strength three were tested by fitting multivariate adaptive regression splines to data from an inventory-forecasting continuous-state stochastic dynamic program.
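The two families of space-filling measures can be sketched as follows; the small design below is an arbitrary stand-in (not a strength-3 OA), and the grid resolution is chosen only for illustration.

```python
# Sketch of the two families of space-filling measures compared in the paper:
# (a) distances between discretization points within the design
#     (larger is better), and
# (b) distances from full-grid points to their nearest design point
#     (smaller is better).
import numpy as np
from itertools import product
from scipy.spatial.distance import cdist

design = np.array(list(product(range(3), repeat=3)))[::4] / 2.0   # hypothetical stand-in design in [0,1]^3
grid = np.array(list(product(np.linspace(0, 1, 5), repeat=3)))    # full grid over the state space

within = cdist(design, design)
np.fill_diagonal(within, np.inf)
min_pairwise = within.min()                       # maximin-type criterion
avg_pairwise = within[within != np.inf].mean()    # average-distance variant

to_design = cdist(grid, design).min(axis=1)
max_cover = to_design.max()                       # minimax (covering) criterion
avg_cover = to_design.mean()                      # average covering distance

print(min_pairwise, avg_pairwise, max_cover, avg_cover)
```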
Bioprocess Engineering | 2000
Victoria C. P. Chen; Derrick K. Rollins
In recent years researchers in many areas have used artificial neural networks (ANNs) to model a variety of physical relationships. While in many cases this choice appears sound and reasonable, one must remember that ANN modeling is an empirical modeling technique (based on data) and is subject to the limitations of such techniques. Poor prediction occurs when the training data set does not contain adequate “information” to model a dynamic process. Using data from a simulated continuous-stirred tank reactor, this paper illustrates four scenarios: (1) steady state, (2) large process time constant, (3) infrequent sampling, and (4) variable sampling rate. The first scenario is typical of simulation studies, while the other three incorporate attributes found in real plant data. For the cases in which ANNs predicted well, linear regression (LR), one of the oldest empirical modeling techniques, predicted equally well; when LR failed to accurately model or predict the data, ANNs also predicted poorly. Since real plant data would resemble a combination of situations (2), (3), and (4), it is important to understand that empirical models are not necessarily appropriate for predictively modeling dynamic processes in practice.
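The ANN-versus-linear-regression comparison can be mimicked on simple simulated dynamic data, as in the sketch below; a first-order process stands in for the paper's CSTR, and lengthening the sampling interval or input-change frequency would mimic the infrequent- and variable-sampling scenarios.

```python
# Compare linear regression and a small ANN on lagged data from a simulated
# first-order dynamic process (an illustrative stand-in for the CSTR).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n, tau = 500, 20.0
u = np.repeat(rng.uniform(-1, 1, n // 25), 25)       # piecewise-constant input signal
y = np.zeros(n)
for k in range(1, n):                                # first-order dynamics with noise
    y[k] = y[k - 1] + (u[k - 1] - y[k - 1]) / tau + rng.normal(0, 0.01)

X = np.column_stack([y[:-1], u[:-1]])                # lagged regressors
target = y[1:]
split = int(0.7 * len(target))

for name, model in [("linear regression", LinearRegression()),
                    ("ANN", MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0))]:
    model.fit(X[:split], target[:split])
    rmse = np.sqrt(np.mean((model.predict(X[split:]) - target[split:]) ** 2))
    print(name, round(rmse, 4))
```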
IIE Transactions | 2008
Venkata L. Pilla; Jay M. Rosenberger; Victoria C. P. Chen; Barry C. Smith
The fleet assignment model allocates a fleet of aircraft to scheduled flight legs in an airline timetable. The fleet assignment model addressed in this paper uses a two-stage stochastic programming framework along with the Boeing concept of demand-driven dispatch to assign crew-compatible aircraft in the first stage, so as to enhance the demand-capturing potential of swapping in the second stage. A design and analysis of computer experiments approach is used to reduce the computation involved in solving the problem. The main contribution of this paper is a method to obtain an approximation for the expected profit function using a regression splines fit generated over a Latin hypercube design. Results on the accuracy of the fit for a real airline carrier are presented and future work is discussed.
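The approximation idea can be sketched as: sample first-stage settings with a Latin hypercube, estimate expected second-stage profit by Monte Carlo at each sample, and fit a regression-spline surrogate. The profit function and the two capacity variables below are hypothetical stand-ins for the airline's fleet assignment model.

```python
# Latin hypercube sampling of first-stage decisions, Monte Carlo estimates of
# expected profit, and a regression-spline surrogate of the resulting surface.
import numpy as np
from scipy.stats import qmc
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def expected_profit(capacity, n_scenarios=200):
    # Monte Carlo estimate of E[revenue - cost] over random demand scenarios.
    demand = rng.gamma(shape=4.0, scale=25.0, size=n_scenarios)
    revenue = np.minimum(demand, capacity) * 1.0
    cost = 0.4 * capacity
    return float(np.mean(revenue - cost))

# First-stage design: Latin hypercube over two hypothetical capacity variables.
design = qmc.scale(qmc.LatinHypercube(d=2, seed=0).random(n=50), [50, 50], [200, 200])
profits = np.array([expected_profit(row[0]) + expected_profit(row[1]) for row in design])

# Regression-spline surrogate of the expected profit surface.
surrogate = make_pipeline(SplineTransformer(degree=3, n_knots=6), LinearRegression())
surrogate.fit(design, profits)
print(surrogate.predict(np.array([[120.0, 150.0]])))
```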