L. Jeff Hong
City University of Hong Kong
Publication
Featured research published by L. Jeff Hong.
Operations Research | 2011
L. Jeff Hong; Yi Yang; Liwei Zhang
When there is parameter uncertainty in the constraints of a convex optimization problem, it is natural to formulate the problem as a joint chance constrained program (JCCP), which requires that all constraints be satisfied simultaneously with a given large probability. In this paper, we propose to solve the JCCP by a sequence of convex approximations. We show that the solutions of the sequence of approximations converge to a Karush-Kuhn-Tucker (KKT) point of the JCCP under a certain asymptotic regime. Furthermore, we propose to use a gradient-based Monte Carlo method to solve the sequence of convex approximations.
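To illustrate the smoothing idea behind such approximations, here is a toy sketch. The two-dimensional linear constraint, the sample sizes, and the logistic surrogate are illustrative assumptions, not the paper's construction (the paper uses a different conservative approximation of the indicator function):

```python
import numpy as np

rng = np.random.default_rng(8)
xi = rng.normal(1.0, 0.3, size=(100_000, 2))  # Monte Carlo samples of the uncertain parameters

def violation_prob(x, b=3.0, tau=None):
    """P(xi . x > b), via the raw indicator or a smooth logistic surrogate."""
    g = xi @ x - b
    if tau is None:
        return (g > 0).mean()                        # nonsmooth in x: no usable gradient
    return (1.0 / (1.0 + np.exp(-g / tau))).mean()   # smooth in x: admits a Monte Carlo gradient

x = np.array([1.2, 1.2])
exact = violation_prob(x)             # true violation probability at x
smooth = violation_prob(x, tau=0.02)  # approaches the exact probability as tau -> 0
```

As the smoothing parameter tau shrinks, the surrogate converges to the true violation probability while remaining differentiable in x, which is what makes gradient-based Monte Carlo methods applicable.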
Operations Research | 2009
L. Jeff Hong
Quantiles of a random performance serve as important alternatives to the usual expected value. They are used in the financial industry as measures of risk and in the service industry as measures of service quality. To manage the quantile of a performance, we need to know how changes in the input parameters affect the output quantiles; these derivatives are called quantile sensitivities. In this paper, we show that quantile sensitivities can be written in the form of conditional expectations. Based on this conditional-expectation form, we first propose an infinitesimal-perturbation-analysis (IPA) estimator. The IPA estimator is asymptotically unbiased, but it is not consistent. We then obtain a consistent estimator by dividing the data into batches and averaging the IPA estimates of all batches. The estimator satisfies a central limit theorem for i.i.d. data, and its rate of convergence is strictly slower than n^(-1/3). The numerical results show that the estimator works well for practical problems.
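A minimal sketch of the batching idea on a toy model where the answer is known in closed form. The model Y = theta*X with X exponential, and all sample sizes, are illustrative assumptions rather than the paper's setting:

```python
import numpy as np

rng = np.random.default_rng(1)

def batched_ipa_quantile_sens(theta, alpha=0.9, n_batches=50, batch_size=400):
    # Y = theta * X with X ~ Exp(1); the pathwise (IPA) derivative is dY/dtheta = X.
    estimates = []
    for _ in range(n_batches):
        x = rng.exponential(1.0, size=batch_size)
        y = theta * x
        # order statistic defining the sample alpha-quantile of this batch
        idx = np.argsort(y)[int(np.ceil(alpha * batch_size)) - 1]
        # batch IPA estimate: dY/dtheta evaluated at that order statistic
        estimates.append(x[idx])
    return np.mean(estimates)  # average the per-batch IPA estimates

est = batched_ipa_quantile_sens(2.0)
true_val = -np.log(1 - 0.9)  # dq/dtheta = q_0.9(X) = ln 10 for this model
```

A single batch's estimate is noisy and the unbatched IPA estimator is inconsistent; averaging across batches is what recovers consistency.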
IIE Transactions | 2005
L. Jeff Hong; Barry L. Nelson
Statistical Ranking and Selection (R&S) is a collection of experiment design and analysis techniques for selecting the "population" with the largest or smallest mean performance from among a finite set of alternatives. R&S procedures have received considerable research attention in the stochastic simulation community, and they have been incorporated in commercial simulation software. One of the ways that R&S procedures are evaluated and compared is via the expected number of samples (often replications) that must be generated to reach a decision. In this paper we argue that sampling cost alone does not adequately characterize the efficiency of R&S procedures, and that the cost of switching among the simulations of the alternative systems should also be considered. We introduce two new, adaptive procedures, the minimum switching sequential procedure and the multi-stage sequential procedure with tradeoff, that provide the same statistical guarantees as existing procedures and significantly reduce the expected total computational cost of application, especially when applied to favorable configurations of the competing means.
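The switching-cost accounting can be illustrated by replaying two sampling schedules with the same total sample budget. The system means, the schedules, and the simple sample-mean selection rule are illustrative assumptions; they are not the paper's procedures and carry no statistical guarantee:

```python
import numpy as np

rng = np.random.default_rng(2)
means = [0.0, 0.2, 0.5, 1.0]  # hypothetical systems; system 3 has the best mean

def simulate(i):
    return rng.normal(means[i], 1.0)  # one replication of system i

def run(schedule):
    # Replay a sampling schedule, tallying both samples and switches between systems.
    samples, switches, last = 0, 0, None
    totals, counts = np.zeros(len(means)), np.zeros(len(means))
    for i in schedule:
        if last is not None and i != last:
            switches += 1
        last = i
        totals[i] += simulate(i)
        counts[i] += 1
        samples += 1
    return int(np.argmax(totals / counts)), samples, switches

k, n = len(means), 200
round_robin = [i for _ in range(n) for i in range(k)]  # switch on every sample
batched = [i for i in range(k) for _ in range(n)]      # same samples, k-1 switches
best_rr, n_rr, sw_rr = run(round_robin)
best_b, n_b, sw_b = run(batched)
```

Both schedules draw 800 samples and here select the best system, but round-robin incurs 799 switches versus 3 for the batched schedule; that gap is the cost the paper's adaptive procedures are designed to manage.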
INFORMS Journal on Computing | 2013
Kuo-Hao Chang; L. Jeff Hong; Hong Wan
Response surface methodology (RSM) is a widely used method for simulation optimization. Its strategy is to explore small subregions of the decision space in succession instead of attempting to explore the entire decision space in a single attempt. This method is especially suitable for complex stochastic systems where little knowledge is available. Although RSM is popular in practice, its current applications in simulation optimization treat simulation experiments the same as real experiments. However, the unique properties of simulation experiments make traditional RSM inappropriate in two important aspects: (1) it is not automated, so human involvement is required at each step of the search process; and (2) RSM is a heuristic procedure without a convergence guarantee, so the quality of the final solution cannot be quantified. We propose the stochastic trust-region response-surface method (STRONG) for simulation optimization to address these problems. STRONG combines RSM with the classic trust-region method developed for deterministic optimization to eliminate the need for human intervention and to achieve the desired convergence properties. The numerical study shows that STRONG can outperform existing methodologies, especially for problems that have grossly noisy response surfaces, and its computational advantage becomes more obvious as the dimension of the problem increases.
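A bare-bones trust-region loop on a noisy quadratic conveys the accept-and-expand versus reject-and-shrink mechanism. The test function, replication counts, and update constants are illustrative assumptions, and this sketch omits STRONG's regression models and hypothesis tests:

```python
import numpy as np

rng = np.random.default_rng(3)

def noisy_f(x, reps=30):
    # simulation oracle: true response ||x||^2 plus averaged observation noise
    return np.sum(x**2) + rng.normal(0, 0.5, size=reps).mean()

x, radius = np.array([3.0, -2.0]), 1.0
for _ in range(40):
    # central-difference gradient estimate at trust-region scale
    g = np.array([(noisy_f(x + radius / 2 * e) - noisy_f(x - radius / 2 * e)) / radius
                  for e in np.eye(2)])
    cand = x - radius * g / (np.linalg.norm(g) + 1e-12)  # step to the region boundary
    if noisy_f(cand) < noisy_f(x):      # apparent improvement: accept and expand
        x, radius = cand, min(radius * 1.5, 2.0)
    else:                               # apparent failure: reject and shrink
        radius *= 0.5
```

Shrinking the region on rejected steps is what forces the local model to become accurate enough near the current iterate, which is the intuition behind the convergence argument.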
IIE Transactions | 2007
L. Jeff Hong; Barry L. Nelson
Statistical Ranking and Selection (R&S) is a collection of experiment design and analysis techniques for selecting the system with the largest or smallest mean performance from among a finite set of alternatives. R&S procedures have received considerable research attention in the stochastic simulation community, and they have been incorporated in commercial simulation software. All existing procedures assume that the set of alternatives is available at the beginning of the experiment. In many situations, however, the alternatives are revealed (generated) sequentially during the experiment. We introduce procedures that are capable of selecting the best alternative in these situations and provide the desired statistical guarantees.
ACM Transactions on Modeling and Computer Simulation | 2007
L. Jeff Hong; Barry L. Nelson
The goal of this article is to provide a general framework for locally convergent random-search algorithms for stochastic optimization problems when the objective function is embedded in a stochastic simulation and the decision variables are integer ordered. The framework guarantees desirable asymptotic properties, including almost-sure convergence and known rate of convergence, for any algorithms that conform to its mild conditions. Within this framework, algorithm designers can incorporate sophisticated search schemes and complicated statistical procedures to design new algorithms.
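A bare-bones instance of such a search on an integer lattice. The quadratic test function and the sampling rule are illustrative assumptions; a framework-conforming algorithm would, among other conditions, accumulate observations across revisits rather than compare fresh sample means:

```python
import numpy as np

rng = np.random.default_rng(4)

def sim(x, reps=20):
    # noisy evaluation of g(x) = (x1 - 3)^2 + (x2 + 1)^2, averaged over replications
    return (x[0] - 3)**2 + (x[1] + 1)**2 + rng.normal(0, 1, reps).mean()

def neighbors(x):
    # integer-ordered neighborhood: +/-1 moves along each coordinate
    return [(x[0] + d, x[1]) for d in (-1, 1)] + [(x[0], x[1] + d) for d in (-1, 1)]

x = (0, 0)
for _ in range(60):
    cand = neighbors(x)[rng.integers(4)]  # sample a random neighbor
    if sim(cand) < sim(x):                # move when the candidate looks better
        x = cand
```

The framework's contribution is exactly that schemes like this, once they satisfy its mild conditions, inherit almost-sure local convergence regardless of how the neighbor-sampling step is designed.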
INFORMS Journal on Computing | 2013
Jie Xu; Barry L. Nelson; L. Jeff Hong
We propose the adaptive hyperbox algorithm (AHA), an instance of a locally convergent, random search algorithm for solving discrete optimization-via-simulation problems. Compared to the COMPASS algorithm, AHA is more efficient for high-dimensional problems. By analyzing models of the behavior of COMPASS and AHA, we show why COMPASS slows down significantly as the dimension increases, whereas AHA is less affected. Both AHA and COMPASS can be used as the local search algorithm within the Industrial Strength COMPASS framework, which consists of a global search phase, a local search phase, and a final cleanup phase. We compare the performance of AHA to COMPASS both within the Industrial Strength COMPASS framework and as stand-alone algorithms. Numerical experiments demonstrate that AHA scales up well to high-dimensional problems and performs similarly to COMPASS on low-dimensional problems.
Management Science | 2009
Michael C. Fu; L. Jeff Hong; Jian-Qiang Hu
Estimating quantile sensitivities is important in many optimization applications, from hedging in financial engineering to service-level constraints in inventory control to more general chance constraints in stochastic programming. Recently, Hong (Hong, L. J. 2009. Estimating quantile sensitivities. Oper. Res. 57 118--130) derived a batched infinitesimal perturbation analysis estimator for quantile sensitivities, and Liu and Hong (Liu, G., L. J. Hong. 2009. Kernel estimation of quantile sensitivities. Naval Res. Logist. 56 511--525) derived a kernel estimator. Both of these estimators are consistent, with convergence rates bounded by n^(-1/3) and n^(-2/5), respectively. In this paper, we use conditional Monte Carlo to derive a consistent quantile sensitivity estimator that improves upon these convergence rates and requires no batching or binning. We illustrate the new estimator using a simple but realistic portfolio credit risk example, for which the previous work is inapplicable.
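The conditioning idea can be sketched on a model where the answer is known in closed form. The model Y = theta*X + Z with X, Z independent standard normal is an illustrative assumption, not the paper's portfolio example:

```python
import numpy as np

rng = np.random.default_rng(5)

def cmc_quantile_sens(theta=1.0, alpha=0.9, n=200_000):
    # Conditioning on X, P(Y <= y | X) = Phi(y - theta*X) is smooth in y and theta,
    # so dq/dtheta = E[X phi(q - theta*X)] / E[phi(q - theta*X)], with phi the
    # standard normal density; no batching or binning is needed.
    x = rng.standard_normal(n)
    z = rng.standard_normal(n)
    y = theta * x + z
    q = np.quantile(y, alpha)                 # sample alpha-quantile of Y
    w = np.exp(-(q - theta * x)**2 / 2)       # proportional to phi(q - theta*x); constants cancel
    return (x * w).mean() / w.mean()

est = cmc_quantile_sens()
# Y ~ N(0, theta^2 + 1), so the true sensitivity is z_0.9 * theta / sqrt(theta^2 + 1)
true_val = 1.281552 / np.sqrt(2.0)
```

Conditioning replaces the nonsmooth indicator inside the distribution function with a smooth conditional probability, which is what removes the batching/binning bottleneck of the earlier estimators.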
Management Science | 2012
Zhaolin Hu; Jing Cao; L. Jeff Hong
Integrated assessment models that combine geophysics and economics features are often used to evaluate and compare global warming policies. Because there are typically profound uncertainties in these models, a simulation approach is often used. This approach requires that the distribution of the uncertain parameters be clearly specified. However, this is typically impossible because there is often a significant amount of ambiguity (e.g., estimation error) in specifying the distribution. In this paper, we adopt the widely used multivariate normal distribution to model the uncertain parameters. However, we assume that the mean vector and covariance matrix of the distribution lie within some ambiguity sets. We then show how to find the worst-case performance of a given policy over all distributions constrained by the ambiguity sets. This worst-case performance provides a robust evaluation of the policy. We test our algorithm on a famous integrated model of climate change, known as the Dynamic Integrated Model of Climate and the Economy (DICE model). We find that the DICE model is sensitive to the mean and covariance of the parameters. Furthermore, we find that, based on the DICE model, moderately tight environmental policies robustly outperform the no-controls policy and the famous aggressive policies proposed by Stern and Gore. This paper was accepted by Dimitris Bertsimas, optimization.
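For a linear performance measure and an ellipsoidal ambiguity set on the mean alone, the worst case has a closed form, which makes for a compact sanity check. All numbers below are illustrative assumptions; the paper treats joint mean-and-covariance ambiguity for nonlinear models:

```python
import numpy as np

rng = np.random.default_rng(6)

# ambiguity set for the mean: (mu - mu0)' S^{-1} (mu - mu0) <= delta
mu0 = np.array([1.0, 2.0])
S = np.array([[1.0, 0.3], [0.3, 0.5]])
delta = 0.25
c = np.array([2.0, -1.0])  # policy performance is linear in the mean: h = c . mu

# worst case (minimum) of c . mu over the ellipsoid has the closed form below
worst_closed = c @ mu0 - np.sqrt(delta * c @ S @ c)

# cross-check by brute force over the ellipsoid boundary (where a linear
# objective attains its minimum)
L = np.linalg.cholesky(S)
u = rng.standard_normal((100_000, 2))
u /= np.linalg.norm(u, axis=1, keepdims=True)   # uniform directions on the circle
mus = mu0 + np.sqrt(delta) * u @ L.T            # points on the ellipsoid boundary
worst_grid = (mus @ c).min()
```

The closed form follows from substituting mu = mu0 + sqrt(delta) * L u with ||u|| <= 1, which reduces the problem to minimizing an inner product over the unit ball.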
Operations Research | 2011
Guangwu Liu; L. Jeff Hong
The Greeks are the derivatives (also known as sensitivities) of option prices with respect to market parameters. They play an important role in financial risk management. Among the many Monte Carlo methods of estimating the Greeks, the classical pathwise method requires only the pathwise information that is directly observable from simulation and is generally easier to implement than many other methods. However, the classical pathwise method is generally not applicable to the Greeks of options with discontinuous payoffs or to second-order Greeks. In this paper, we generalize the classical pathwise method to allow discontinuity in the payoffs. We show how to apply the new pathwise method to first- and second-order Greeks and propose kernel estimators that require little analytical effort and are very easy to implement. The numerical results show that our estimators work well for practical problems.
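A one-dimensional illustration of the kernel-smoothed pathwise idea: the delta of a digital call under geometric Brownian motion, where the classical pathwise derivative of the indicator payoff is zero almost everywhere. The parameters and bandwidth are illustrative assumptions; the true (undiscounted) value here is phi(d2) / (S0 * sigma * sqrt(T)) ≈ 0.0197:

```python
import numpy as np

rng = np.random.default_rng(7)

def digital_delta_kernel(s0=100.0, k=100.0, r=0.05, sigma=0.2, T=1.0,
                         n=400_000, h=1.0):
    # GBM terminal price and its pathwise derivative dS_T/dS0 = S_T / s0
    z = rng.standard_normal(n)
    st = s0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    dst = st / s0
    # kernel-smoothed pathwise estimator of d/dS0 P(S_T > K): replace the Dirac
    # delta at S_T = K with a Gaussian kernel of bandwidth h
    kern = np.exp(-((st - k) / h)**2 / 2) / (h * np.sqrt(2 * np.pi))
    return (kern * dst).mean()

est = digital_delta_kernel()
```

The bandwidth trades bias (too wide) against variance (too narrow); here the payoff's discontinuity is smoothed at a scale far below the spread of S_T, so the bias is negligible.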