Henry Lam
University of Michigan
Publication
Featured research published by Henry Lam.
IEEE Transactions on Learning Technologies | 2014
Christopher G. Brinton; Mung Chiang; Shaili Jain; Henry Lam; Zhenming Liu; Felix Ming Fai Wong
We study user behavior in the courses offered by a major massive open online course (MOOC) provider during the summer of 2013. Since social learning is a key element of scalable education on MOOCs and is done via online discussion forums, our main focus is on understanding forum activities. Two salient features of these activities drive our research: (1) high decline rate: for each course studied, the volume of discussion declined continuously throughout the duration of the course; (2) high-volume, noisy discussions: at least 30 percent of the courses produced new threads at rates that are infeasible for students or teaching staff to read through. Further, a substantial portion of these discussions are not directly course-related. In our analysis, we investigate factors that are associated with the decline of activity on MOOC forums, and we find effective strategies to classify threads and rank their relevance. Specifically, we first use linear regression models to analyze the forum activity count data over time, and make a number of observations; for instance, the teaching staff's active participation in the discussions is correlated with an increase in the discussion volume but does not slow down the decline rate. We then propose a unified generative model for the discussion threads, which allows us both to choose efficient thread classifiers and to design an effective algorithm for ranking thread relevance. Further, our algorithm is compared against two baselines using human evaluation from Amazon Mechanical Turk.
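As a rough illustration of the regression step described above, the sketch below fits a linear trend to the log of daily post counts; the 60-day horizon, decline shape, and counts are invented placeholders, not the paper's dataset.

```python
# Sketch of the regression step: fit a linear trend to log daily post counts.
# All data here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(60)
counts = rng.poisson(500 * np.exp(-days / 25))      # synthetic declining activity

X = np.column_stack([np.ones_like(days), days])     # intercept + time regressor
y = np.log(counts + 1)                              # +1 guards against log(0)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"estimated decline rate: {beta[1]:.4f} log-counts per day")
```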
Mathematics of Operations Research | 2016
Henry Lam
We study a worst-case approach to measure the sensitivity to model misspecification in the performance analysis of stochastic systems. The situation of interest is when only minimal parametric information is available on the form of the true model. Under this setting, we pose optimization programs that compute the worst-case performance measures, subject to constraints on the amount of model misspecification measured by the Kullback-Leibler divergence. Our main contribution is the development of infinitesimal approximations for these programs, resulting in asymptotic expansions of their optimal values as the divergence shrinks to zero. The coefficients of these expansions can be computed via simulation, and are mathematically derived from the representation of the worst-case models as changes of measure that satisfy a well-defined class of functional fixed-point equations.
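The expansion coefficients can be illustrated with a small Monte Carlo sketch. This assumes the common first-order form max_Q E_Q[h] ~ E_P[h] + sqrt(2*eta*Var_P(h)) for a small KL budget eta, consistent with the description above; the baseline model and performance function h are illustrative choices, not from the paper.

```python
# Monte Carlo sketch of a first-order worst-case expansion under a small KL
# budget eta, assuming the form E_P[h] + sqrt(2*eta*Var_P(h)). Baseline model
# and performance function are illustrative.
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=1.0, size=100_000)   # baseline model P (service times)
h = (x > 3.0).astype(float)                    # performance measure: tail indicator

eta = 0.01                                     # KL-divergence budget
mean_h, var_h = h.mean(), h.var(ddof=1)
worst = mean_h + np.sqrt(2.0 * eta * var_h)
print(f"baseline {mean_h:.5f}; first-order worst case {worst:.5f}")
```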
Symposium on Theoretical Aspects of Computer Science | 2012
Kai-Min Chung; Henry Lam; Zhenming Liu; Michael Mitzenmacher
We prove the first Chernoff-Hoeffding bounds for general nonreversible finite-state Markov chains based on the standard L_1 (variation distance) mixing-time of the chain. Specifically, consider an ergodic Markov chain M and a weight function f: [n] -> [0,1] on the state space [n] of M with mean mu = E_{v <- pi}[f(v)], where pi is the stationary distribution of M. A t-step random walk (v_1, ..., v_t) on M starting from the stationary distribution has total weight X = sum_i f(v_i) with expectation E[X] = mu t. Let T be the L_1 mixing-time of M. We show that the probability of X deviating from its mean by a multiplicative factor of delta, i.e., Pr[ |X - mu t| >= delta mu t ], is at most exp(-Omega(delta^2 mu t / T)) for 0 <= delta <= 1, and exp(-Omega(delta mu t / T)) for delta > 1. In fact, the bounds hold even if the weight functions f_i for i in [t] are distinct, provided that all of them have the same mean mu. We also obtain a simplified proof for the Chernoff-Hoeffding bounds based on the spectral expansion lambda of M, which is the square root of the second largest eigenvalue (in absolute value) of M tilde{M}, where tilde{M} is the time-reversal Markov chain of M. We show that the probability Pr[ |X - mu t| >= delta mu t ] is at most exp(-Omega(delta^2 (1-lambda) mu t)) for 0 <= delta <= 1, and exp(-Omega(delta (1-lambda) mu t)) for delta > 1. Both of our results extend to continuous-time Markov chains, and to the case where the walk starts from an arbitrary distribution x, at a price of a multiplicative factor depending on the distribution x in the concentration bounds.
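A quick numerical sanity check of the concentration phenomenon (not of the proof technique): simulate a toy ergodic chain, average a [0,1]-valued weight function along the walk, and observe that deviations from the stationary mean mu are rare for large t. The two-state chain, f, and deviation threshold below are arbitrary choices.

```python
# Numerical sanity check: the time-average of f along a walk on an ergodic
# chain concentrates around mu = E_pi[f]. All parameters are toy choices.
import numpy as np

rng = np.random.default_rng(2)
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])              # ergodic transition matrix
pi = np.array([0.75, 0.25])             # its stationary distribution (pi P = pi)
f = np.array([0.2, 0.8])                # weight function f: [n] -> [0, 1]
mu = pi @ f                             # stationary mean, here 0.35

def walk_average(t: int) -> float:
    """Average of f along a t-step walk started from the stationary distribution."""
    v = rng.choice(2, p=pi)
    total = 0.0
    for _ in range(t):
        total += f[v]
        v = rng.choice(2, p=P[v])
    return total / t

t = 1000
samples = np.array([walk_average(t) for _ in range(400)])
freq = np.mean(np.abs(samples - mu) >= 0.05)
print(f"mu = {mu:.2f}; empirical Pr[|X/t - mu| >= 0.05] = {freq:.3f}")
```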
IEEE Transactions on Intelligent Transportation Systems | 2018
Ding Zhao; Xianan Huang; Huei Peng; Henry Lam; David J. LeBlanc
The safety of automated vehicles (AVs) must be assured before their release and deployment. The current approach to evaluation relies primarily on 1) testing AVs on public roads or 2) track testing with scenarios defined in a test matrix. These two methods have completely opposing drawbacks: the former, while offering realistic scenarios, takes too much time to execute, and the latter, though it can be completed in a short amount of time, has no clear correlation to safety benefits in the real world. To avoid the aforementioned problems, we propose accelerated evaluation, focusing on the car-following scenario. The stochastic human-controlled vehicle (HV) motions are modeled based on 1.3 million miles of naturalistic driving data collected by the University of Michigan Safety Pilot Model Deployment Program. The statistics of the HV behaviors are then modified to generate more intense interactions between HVs and AVs to accelerate the evaluation procedure. Importance sampling theory is used to ensure that the safety benefits of AVs are accurately assessed under accelerated tests. Crash, injury, and conflict rates for a simulated AV are estimated to demonstrate the proposed approach. Results show that test duration is reduced by a factor of 300 to 100,000 compared with the non-accelerated (naturalistic) evaluation. In other words, the proposed techniques have great potential for accelerating the AV evaluation process.
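The reweighting step at the heart of accelerated evaluation can be sketched in a few lines: sample the human-driven vehicle's behavior from a deliberately more aggressive distribution, then correct each sample with a likelihood ratio. The scalar Gaussian "crash" model below is a placeholder, not the paper's car-following model.

```python
# Importance-sampling sketch: sample HV behavior from a skewed distribution and
# correct with likelihood ratios. The one-dimensional model is a placeholder.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
nat = stats.norm(loc=2.0, scale=1.0)     # naturalistic HV deceleration (m/s^2)
acc = stats.norm(loc=5.0, scale=1.0)     # accelerated (skewed) test distribution
threshold = 6.0                          # hypothetical level the AV cannot handle

n = 10_000
x = acc.rvs(size=n, random_state=rng)
lr = nat.pdf(x) / acc.pdf(x)             # likelihood ratio restores unbiasedness
p_hat = np.mean((x > threshold) * lr)
print(f"estimated crash probability: {p_hat:.2e}")   # exact value ~3.2e-05
```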
Winter Simulation Conference | 2011
Jose H. Blanchet; Henry Lam
We discuss rare event simulation techniques based on state-dependent importance sampling. Classical examples and counter-examples are shown to illustrate the reach and limitations of the state-independent approach. State-dependent techniques are helpful to deal with these limitations. These techniques can be applied to both light- and heavy-tailed systems and are often based on subsolutions to an associated Isaacs equation and on Lyapunov bounds.
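A classical state-independent example in this literature is exponential tilting for an i.i.d. light-tailed sum. A minimal sketch, with illustrative parameters, Gaussian increments, and the standard optimal tilt theta* = a:

```python
# Classical state-independent importance sampling: estimate the rare probability
# P(S_n >= n*a) for an i.i.d. N(0,1) sum by sampling increments from the tilted
# N(theta, 1) law. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(4)
n, a = 50, 0.8
theta = a                                     # optimal exponential tilt for N(0,1)

trials = 20_000
z = rng.normal(loc=theta, size=(trials, n))   # paths under the tilted measure
s = z.sum(axis=1)
# Path likelihood ratio dP/dQ = exp(-theta*S_n + n*theta^2/2) for Gaussian tilting.
lr = np.exp(-theta * s + n * theta**2 / 2)
p_hat = np.mean((s >= n * a) * lr)
print(f"P(S_n >= {n * a:.0f}) ~= {p_hat:.2e}")   # true value is about 7.7e-09
```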
IEEE Transactions on Intelligent Transportation Systems | 2018
Zhiyuan Huang; Henry Lam; David J. LeBlanc; Ding Zhao
The process to certify highly automated vehicles has not yet been defined by any country in the world. Currently, companies test automated vehicles on public roads, which is time-consuming and inefficient. We previously proposed the accelerated evaluation concept, which uses modified statistics of the surrounding vehicles together with importance sampling theory to reduce the evaluation time by several orders of magnitude, while ensuring that the evaluation results are statistically accurate. In this paper, we further improve the accelerated evaluation concept by using piecewise mixture distribution models instead of single parametric distribution models. We developed and applied this idea to a forward collision control system reacting to vehicles making cut-in lane changes. The behavior of the cut-in vehicles was modeled based on 403,581 lane changes collected by the University of Michigan Safety Pilot Model Deployment Program. Simulation results confirm that the piecewise mixture distribution method outperforms single parametric distribution methods in accuracy and efficiency, and accelerates the evaluation process by almost four orders of magnitude.
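The piecewise idea can be sketched as follows: fit separate parametric pieces below and above a threshold, weighted by the empirical mass in each piece, so the tail is no longer hostage to a single global fit. The data, threshold, and exponential pieces here are synthetic stand-ins for the paper's cut-in behavior models.

```python
# Piecewise-fit sketch: separate parametric pieces below/above a threshold,
# weighted by empirical mass. Data and pieces are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(5)
gaps = rng.gamma(shape=2.0, scale=1.5, size=50_000)   # placeholder "data"
tau = 4.0                                             # piece boundary

tail = gaps[gaps > tau]
w_tail = tail.size / gaps.size                        # empirical tail weight
rate_tail = 1.0 / (tail - tau).mean()                 # exponential fit to excesses

rate_global = 1.0 / gaps.mean()                       # single global exponential fit
q = 10.0
p_piece = w_tail * np.exp(-rate_tail * (q - tau))     # piecewise tail estimate
p_single = np.exp(-rate_global * q)                   # single-model tail estimate
print(f"P(gap > {q}): piecewise {p_piece:.4f} vs single {p_single:.4f} "
      f"(empirical {np.mean(gaps > q):.4f})")
```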
Winter Simulation Conference | 2015
Henry Lam; Enlu Zhou
We consider stochastic optimization problems in which the input probability distribution is not fully known, and can only be observed through data. Common procedures handle such problems by optimizing an empirical counterpart, namely by using an empirical distribution of the input. The optimal solutions obtained through such procedures are hence subject to the uncertainty of the data. In this paper, we explore techniques to quantify this uncertainty that have potentially good finite-sample performance. We consider three approaches: the empirical likelihood method, the nonparametric Bayesian approach, and the bootstrap approach. They are designed to approximate the confidence intervals or posterior distributions of the optimal values or the optimality gaps. We present computational procedures for each of the approaches and discuss their relative benefits. A numerical example on conditional value-at-risk is used to demonstrate these methods.
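Of the three approaches, the bootstrap is the simplest to sketch: resample the data, re-solve the empirical optimization on each resample, and take percentile limits for the optimal value. The newsvendor-style objective and all parameters below are illustrative, not from the paper.

```python
# Bootstrap sketch: resample data, re-solve the empirical optimization, and
# read off percentile confidence limits for the optimal value.
import numpy as np

rng = np.random.default_rng(6)
demand = rng.lognormal(mean=2.0, sigma=0.5, size=400)   # observed input data

def empirical_opt(sample: np.ndarray) -> float:
    """Minimize average newsvendor cost over a grid of order quantities."""
    qs = np.linspace(1, 30, 200)
    def cost(q):
        return np.mean(2.0 * np.maximum(q - sample, 0)
                       + 5.0 * np.maximum(sample - q, 0))
    return min(cost(q) for q in qs)

boot = np.array([empirical_opt(rng.choice(demand, size=demand.size))
                 for _ in range(500)])                  # resamples w/ replacement
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"point estimate {empirical_opt(demand):.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```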
Mathematics of Operations Research | 2014
Jose H. Blanchet; Henry Lam
We develop rare-event simulation methodology for the analysis of loss events in a many-server loss system under the quality-driven regime, focusing on the steady-state loss probability (i.e., fraction of lost customers over arrivals) and the behavior of the whole system leading to loss events. The analysis of these events requires working with the full measure-valued process describing the system. This is the first algorithm that is shown to be asymptotically optimal, in the rare-event simulation context, under the setting of many-server queues involving a full measure-valued representation.
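For contrast with the rare-event methodology above: in the basic M/M/n/n system the steady-state loss probability is available in closed form via the Erlang B recursion, and the sketch below computes it for a quality-driven-style parameter choice. This is the target quantity in a simple special case, not the paper's measure-valued algorithm.

```python
# The target quantity in a simple special case, not the paper's algorithm:
# M/M/n/n blocking probability via the Erlang B recursion
# B(k) = a*B(k-1) / (k + a*B(k-1)), with B(0) = 1.
def erlang_b(n_servers: int, offered_load: float) -> float:
    """Blocking probability of an M/M/n/n loss system with offered load a."""
    b = 1.0
    for k in range(1, n_servers + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

# Quality-driven-style regime: many servers, load held below capacity.
print(f"loss probability (n=100, load=80): {erlang_b(100, 80.0):.3e}")
```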
Management Science | 2017
Henry Lam
Procedures for assessing the impact of serial dependency on performance analysis are usually built on parametrically specified models. In this paper, we propose a robust, nonparametric approach to carry out this assessment by computing the worst-case deviation of the performance measure due to arbitrary dependence. The approach is based on optimizations, posited on the model space, whose constraints specify the level of dependency measured by a nonparametric distance to some nominal i.i.d. input model. We study approximation methods for these optimizations via simulation and analysis-of-variance (ANOVA). Numerical experiments demonstrate how the proposed approach can discover the hidden impacts of dependency beyond those revealed by conventional parametric modeling and correlation studies.
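The premise, that dependence alone can move a performance measure while marginals stay fixed, is easy to demonstrate. The following is only a motivation sketch, not the paper's optimization: it injects a Gaussian-copula AR(1) dependence into exponential inputs and compares a queue-like functional.

```python
# Motivation sketch only: serial dependence with fixed marginals can shift a
# performance measure. Exponential(1) inputs get a Gaussian-copula AR(1)
# dependence; we compare a max-partial-sum (queue-like) functional.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def mean_max_partial_sum(rho: float, n: int = 50, reps: int = 20_000) -> float:
    """E[max_k sum_{i<=k} (X_i - 1.1)] with exp(1) marginals, AR(1) latent copula."""
    z = rng.normal(size=(reps, n))
    for i in range(1, n):                          # latent AR(1), N(0,1) marginals
        z[:, i] = rho * z[:, i - 1] + np.sqrt(1.0 - rho**2) * z[:, i]
    u = np.clip(stats.norm.cdf(z), 1e-12, 1 - 1e-12)
    x = -np.log1p(-u)                              # exponential(1) marginals
    return np.max(np.cumsum(x - 1.1, axis=1), axis=1).mean()

print(f"i.i.d. inputs:       {mean_max_partial_sum(0.0):.3f}")
print(f"dependent (rho=0.8): {mean_max_partial_sum(0.8):.3f}")
```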
Winter Simulation Conference | 2015
Soumyadip Ghosh; Henry Lam
Performance analysis via stochastic simulation is often subject to input model uncertainty, meaning that the input model is unknown and needs to be inferred from data. Motivated especially by situations with limited data, we consider a worst-case analysis to handle input uncertainty by representing the partially available input information as constraints and solving a worst-case optimization problem to obtain a conservative bound for the output. In the context of i.i.d. input processes, such an approach involves simulation-based nonlinear optimizations with decision variables being probability distributions. We explore the use of a specialized class of mirror descent stochastic approximation (MDSA) known as the entropic descent algorithm, particularly effective for handling probability simplex constraints, to iteratively solve for the local optima. We show how the mathematical program associated with each iteration of the MDSA algorithm can be efficiently computed, and carry out numerical experiments to illustrate the performance of the algorithm.
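The entropic descent update itself is compact: multiply the current distribution entrywise by the exponentiated gradient step and renormalize, which keeps iterates on the probability simplex automatically. The closed-form toy objective below is a stand-in for the paper's simulation-based worst-case program.

```python
# Minimal entropic descent sketch: mirror descent on the probability simplex
# multiplies each weight by the exponentiated gradient step and renormalizes.
import numpy as np

def entropic_ascent(grad_fn, n: int, steps: int = 200, eta: float = 0.5):
    """Maximize a concave objective over the simplex via exponentiated gradient."""
    p = np.full(n, 1.0 / n)                 # start from the uniform distribution
    for _ in range(steps):
        p = p * np.exp(eta * grad_fn(p))    # multiplicative ascent step
        p /= p.sum()                        # renormalize: stays on the simplex
    return p

# Toy worst-case program: maximize E_p[h] plus an entropy regularizer that
# keeps p from collapsing onto argmax(h); its gradient is h - 0.5*(log p + 1).
h = np.array([0.1, 0.4, 0.2, 0.9])
p_star = entropic_ascent(lambda p: h - 0.5 * (np.log(p) + 1.0), n=4)
print("worst-case input distribution:", np.round(p_star, 3))   # ~ exp(2h)/Z
```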