
Publication


Featured research published by Jeff Liu Hong.


Operations Research | 2006

Discrete Optimization via Simulation Using COMPASS

Jeff Liu Hong; Barry L. Nelson

We propose an optimization-via-simulation algorithm, called COMPASS, for use when the performance measure is estimated via a stochastic, discrete-event simulation, and the decision variables are integer ordered. We prove that COMPASS converges to the set of local optimal solutions with probability 1 for both terminating and steady-state simulation, and for both fully constrained problems and partially constrained or unconstrained problems under mild conditions.
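The most-promising-area idea behind COMPASS can be illustrated with a deliberately simplified 1-D sketch (hypothetical names and a stand-in noisy objective, not the paper's algorithm): the most promising area is taken as the feasible points at least as close to the current sample-best as to any other visited solution, and each iteration samples inside it and re-simulates the incumbent.

```python
import random

def simulate(x, rng):
    # Stand-in stochastic simulation: noisy evaluation of f(x) = (x - 3)**2.
    return (x - 3) ** 2 + rng.gauss(0, 1)

def compass_sketch(feasible, n_iters=200, m=3, seed=0):
    rng = random.Random(seed)
    sums, counts = {}, {}

    def observe(x):
        sums[x] = sums.get(x, 0.0) + simulate(x, rng)
        counts[x] = counts.get(x, 0) + 1

    best = rng.choice(feasible)
    observe(best)
    for _ in range(n_iters):
        # Most promising area: feasible points at least as close to the
        # current sample-best as to every other visited solution.
        mpa = [x for x in feasible
               if all(abs(x - best) <= abs(x - y)
                      for y in counts if y != best)]
        for _ in range(m):
            observe(rng.choice(mpa))
        observe(best)  # keep refining the incumbent's estimate
        best = min(counts, key=lambda x: sums[x] / counts[x])
    return best
```

On a toy integer domain such as {0, …, 10} the search concentrates effort around, and settles near, the true minimizer x = 3.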


ACM Transactions on Modeling and Computer Simulation | 2010

Industrial strength COMPASS: A comprehensive algorithm and software for optimization via simulation

Jie Xu; Barry L. Nelson; Jeff Liu Hong

Industrial Strength COMPASS (ISC) is a particular implementation of a general framework for optimizing the expected value of a performance measure of a stochastic simulation with respect to integer-ordered decision variables in a finite (but typically large) feasible region defined by linear-integer constraints. The framework consists of a global-search phase, followed by a local-search phase, and ending with a “clean-up” (selection of the best) phase. Each phase provides a probability 1 convergence guarantee as the simulation effort increases without bound: Convergence to a globally optimal solution in the global-search phase; convergence to a locally optimal solution in the local-search phase; and convergence to the best of a small number of good solutions in the clean-up phase. In practice, ISC stops short of such convergence by applying an improvement-based transition rule from the global phase to the local phase; a statistical test of convergence from the local phase to the clean-up phase; and a ranking-and-selection procedure to terminate the clean-up phase. Small-sample validity of the statistical test and ranking-and-selection procedure is proven for normally distributed data. ISC is compared to the commercial optimization via simulation package OptQuest on five test problems that range from 2 to 20 decision variables and on the order of 10^4 to 10^20 feasible solutions. These test cases represent response-surface models with known properties and realistic system simulation problems.


Winter Simulation Conference | 2009

A brief introduction to optimization via simulation

Jeff Liu Hong; Barry L. Nelson

Optimization via simulation (OvS) is an exciting and fast-developing area for both research and practice. In this article, we introduce three types of OvS problems: ranking-and-selection (R&S) problems, continuous OvS problems, and discrete OvS problems, and discuss the issues and current research developments for each. We also give some suggestions on how to use commercial OvS software in practice.


Management Science | 2009

Simulating Sensitivities of Conditional Value at Risk

Jeff Liu Hong; Guangwu Liu

Conditional value at risk (CVaR) is both a coherent risk measure and a natural risk statistic. It is often used to measure the risk associated with large losses. In this paper, we study how to estimate the sensitivities of CVaR using Monte Carlo simulation. We first prove that the CVaR sensitivity can be written as a conditional expectation for general loss distributions. We then propose an estimator of the CVaR sensitivity and analyze its asymptotic properties. The numerical results show that the estimator works well. Furthermore, we demonstrate how to use the estimator to solve optimization problems with CVaR objective and/or constraints, and compare it to a popular linear programming-based algorithm.


European Journal of Operational Research | 2006

A sequential procedure for neighborhood selection-of-the-best in optimization via simulation

Juta Pichitlamken; Barry L. Nelson; Jeff Liu Hong

We propose a fully sequential indifference-zone selection procedure designed specifically for use within an optimization-via-simulation algorithm when simulation is costly and partial or complete information on previously visited solutions is maintained. Sequential Selection with Memory guarantees to select the best or near-best alternative with a user-specified probability when some solutions have already been sampled, their previous samples are retained, and simulation outputs are i.i.d. normal. For the case when only summary information on solutions is retained, we derive a modified procedure. We illustrate how our procedures can be applied to optimization-via-simulation problems and compare their performance with other methods through numerical examples.


Annals of Operations Research | 2012

Fighting Strategies in a Market with Counterfeits

Jie Zhang; Jeff Liu Hong; Rachel Q. Zhang

Counterfeiting is a widespread phenomenon and has seen rapid growth in recent years. In this paper, we adopt the standard vertical differentiation model and allow consumers the choices of purchasing an authentic product, purchasing a counterfeit, or not buying. We focus on how non-deceptive counterfeits, which consumers know with certainty to be counterfeits at the time of purchase, affect the price, market share, and profitability of brand-name products. We also consider the strategies for brand-name companies to fight counterfeiting. We compare different fighting strategies in a market with one brand-name product and its counterfeit, and derive equilibrium fighting strategies in a market with two competing brand-name products and a counterfeit under general conditions.


ACM Transactions on Modeling and Computer Simulation | 2014

Monte Carlo Methods for Value-at-Risk and Conditional Value-at-Risk: A Review

Jeff Liu Hong; Zhaolin Hu; Guangwu Liu

Value-at-risk (VaR) and conditional value-at-risk (CVaR) are two widely used risk measures of large losses and are employed in the financial industry for risk management purposes. In practice, loss distributions typically do not have closed-form expressions, but they can often be simulated (i.e., random observations of the loss distribution may be obtained by running a computer program). Therefore, Monte Carlo methods that design simulation experiments and utilize simulated observations are often employed in estimation, sensitivity analysis, and optimization of VaRs and CVaRs. In this article, we review some of the recent developments in these methods, provide a unified framework to understand them, and discuss their applications in financial risk management.


Winter Simulation Conference | 2007

Stochastic trust region gradient-free method (STRONG): a new response-surface-based algorithm in simulation optimization

Kuo-Hao Chang; Jeff Liu Hong; Hong Wan

Response Surface Methodology (RSM) is a metamodel-based optimization method. Its strategy is to explore small subregions of the parameter space in succession instead of attempting to explore the entire parameter space directly, and it has been widely used in simulation optimization. However, RSM has two significant shortcomings: first, it is not automated, and human involvement is usually required in the search process; second, it is a heuristic without a convergence guarantee. This paper proposes the Stochastic Trust Region Gradient-Free Method (STRONG) for simulation optimization with continuous decision variables to address these two problems. STRONG combines the traditional RSM framework with the trust-region method for deterministic optimization to achieve provable convergence and eliminate the need for human involvement. Combined with appropriate experimental designs, and in particular efficient screening experiments, STRONG has the potential to solve high-dimensional problems efficiently.
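The trust-region mechanics can be illustrated with a deliberately stripped-down 1-D sketch (an illustration of the general idea under hypothetical names, not the STRONG algorithm itself): the local model is a finite-difference gradient built from averaged replications, and the trust radius grows on accepted steps and shrinks on rejected ones.

```python
import random

def noisy_f(x, rng):
    # Stand-in simulation: f(x) = (x - 3)**2 observed with noise.
    return (x - 3) ** 2 + rng.gauss(0, 0.5)

def trust_region_sketch(x0, n_iters=100, delta=1.0, reps=20, seed=0):
    rng = random.Random(seed)
    avg = lambda x: sum(noisy_f(x, rng) for _ in range(reps)) / reps
    x = x0
    for _ in range(n_iters):
        fx = avg(x)
        # Local linear model: central-difference gradient from averaged reps.
        g = (avg(x + delta) - avg(x - delta)) / (2 * delta)
        cand = x - delta * (1 if g > 0 else -1)  # step downhill, within radius
        pred = abs(g) * delta                    # model-predicted decrease
        if fx - avg(cand) > 0.25 * pred:         # ratio test passed: accept
            x, delta = cand, min(delta * 1.5, 4.0)
        else:                                    # failed: shrink trust region
            delta = max(delta / 2, 1e-3)
    return x
```

Starting far from the optimum, the radius expands while steps keep succeeding and then contracts as the iterate closes in on the minimizer, mirroring the automated behavior the paper aims for.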


Operations Research Letters | 2010

Speeding up COMPASS for high-dimensional discrete optimization via simulation

Jeff Liu Hong; Barry L. Nelson; Jie Xu

The convergent optimization via most promising area stochastic search (COMPASS) algorithm is a locally convergent random search algorithm for solving discrete optimization via simulation problems. COMPASS has drawn a significant amount of attention since its introduction. While the asymptotic convergence of COMPASS does not depend on the problem dimension, the finite-time performance of the algorithm often deteriorates as the dimension increases. In this paper, we investigate the reasons for this deterioration and propose a simple change to the solution-sampling scheme that significantly speeds up COMPASS for high-dimensional problems without affecting its convergence guarantee.


Operations Research | 2014

Balancing Exploitation and Exploration in Discrete Optimization via Simulation Through a Gaussian Process-Based Search

Lihua Sun; Jeff Liu Hong; Zhaolin Hu

Random search algorithms are often used to solve discrete optimization-via-simulation (DOvS) problems. The most critical component of a random search algorithm is the sampling distribution used to guide the allocation of the search effort. A good sampling distribution can balance the trade-off between the effort spent searching around the current best solution, which is called exploitation, and the effort spent searching largely unknown regions, which is called exploration. However, most random search algorithms for DOvS problems have difficulty balancing this trade-off in a seamless way. In this paper we propose a new scheme that derives a sampling distribution from a fast-fitted Gaussian process based on previously evaluated solutions. We show that the sampling distribution has the desired properties and can automatically balance the exploitation and exploration trade-off. Furthermore, we integrate this sampling distribution into a random search algorithm, called Gaussian process-based search (GPS), and show that the GPS algorithm has the desired global convergence as the simulation effort goes to infinity. We illustrate the properties of the algorithm through a number of numerical experiments.
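The core of such a scheme, fitting a Gaussian process to the evaluated solutions and weighting every point by its posterior chance of beating the incumbent, can be sketched as follows (a hypothetical pure-Python, 1-D illustration; the paper's fast-fitting and convergence machinery are not reproduced):

```python
import math

def rbf(x, y, length=2.0):
    # Squared-exponential kernel on a 1-D integer domain.
    return math.exp(-((x - y) ** 2) / (2 * length ** 2))

def solve(a, b):
    # Gaussian elimination with partial pivoting for a small dense system.
    n = len(a)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[p] = m[p], m[c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n + 1):
                m[r][k] -= f * m[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][k] * x[k] for k in range(r + 1, n))) / m[r][r]
    return x

def gps_weights(obs, domain, noise=1e-6):
    # Fit a GP to the evaluated solutions, then weight every point in the
    # domain by its posterior probability of improving on the current best
    # (minimization): high near good solutions (exploitation) and in
    # uncertain, unexplored regions (exploration).
    xs = sorted(obs)
    ys = [obs[x] for x in xs]
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (noise if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    alpha = solve(K, ys)
    best = min(ys)
    w = {}
    for x in domain:
        ks = [rbf(x, xi) for xi in xs]
        mean = sum(a * k for a, k in zip(alpha, ks))
        v = solve(K, ks)
        var = max(1.0 - sum(k * vi for k, vi in zip(ks, v)), 1e-12)
        # P(f(x) < best) under the N(mean, var) posterior.
        w[x] = 0.5 * (1 + math.erf((best - mean) / math.sqrt(2 * var)))
    total = sum(w.values())
    return {x: wx / total for x, wx in w.items()}
```

Sampling the next solution from these weights concentrates effort near the incumbent while still reserving probability for unexplored regions, which is exactly the seamless exploitation-exploration balance described above.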

Collaboration


Dive into Jeff Liu Hong's collaborations.

Top Co-Authors

Guangwu Liu (City University of Hong Kong)
Rachel Q. Zhang (Hong Kong University of Science and Technology)
Jie Zhang (Guangdong University of Business Studies)
Sandeep Juneja (Tata Institute of Fundamental Research)
Jie Xu (George Mason University)
Jiheng Zhang (Hong Kong University of Science and Technology)
Jin Fang (Hong Kong University of Science and Technology)