Dashi I. Singham
Naval Postgraduate School
Publications
Featured research published by Dashi I. Singham.
Winter Simulation Conference | 2010
Leslie Esher; Stacey Hall; Eva Regnier; Paul J. Sanchez; James A. Hansen; Dashi I. Singham
Recent years have seen an upsurge in piracy, particularly off the Horn of Africa. Piracy differs from other asymmetric threats, such as terrorism, in that it is economically motivated. Pirates operating off East Africa have threatened maritime safety and cost commercial shipping billions of dollars in ransom payments. Piracy in this region is conducted from small boats that can survive only a few days away from their base of operations, have limited survivability in severe weather, and cannot perform boarding operations in high wind or sea-state conditions. In this study, we use agent models and statistical design of experiments to gain insight into how meteorological and oceanographic forecasts can be used to dynamically predict relative risks for commercial shipping.
International Conference on Conceptual Structures | 2012
Joshua A. White; Dashi I. Singham
Many regions around the world are vulnerable to rainfall-induced landslides and debris flows. A variety of methods, from simple analytical approximations to sophisticated numerical methods, have been proposed over the years for capturing the relevant physics leading to landslide initiation. A key shortcoming of current hazard analysis techniques, however, is that they typically rely on a single historical rainfall record as input to the hydromechanical analysis. Unfortunately, the use of a single record ignores the inherently stochastic nature of the rainfall process. In this work, we employ a Markov chain model to generate many realizations of rainfall time series given a measured historical record. We then use these simulated realizations to drive several hundred finite element simulations of subsurface infiltration and collapse. The resulting slope-stability analysis provides an opportunity to assess the inherent distribution of failure statistics, and provides a much more complete picture of slope behavior.
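A common way to realize the Markov chain idea described above is a two-state (wet/dry) chain for daily rainfall occurrence, with wet-day depths drawn from a fitted distribution. The sketch below assumes exponentially distributed depths; all transition probabilities and parameter values are illustrative, not taken from the paper.

```python
import random

def simulate_rainfall(n_days, p_wet_given_dry, p_wet_given_wet, mean_depth, seed=None):
    """Generate one synthetic daily-rainfall realization from a two-state
    (wet/dry) Markov chain; wet-day depths are exponential with the given mean."""
    rng = random.Random(seed)
    series, wet = [], False
    for _ in range(n_days):
        # transition probability depends on whether the previous day was wet
        p = p_wet_given_wet if wet else p_wet_given_dry
        wet = rng.random() < p
        series.append(rng.expovariate(1.0 / mean_depth) if wet else 0.0)
    return series

# Many realizations can then drive many downstream infiltration simulations.
realizations = [simulate_rainfall(365, 0.2, 0.6, 12.0, seed=i) for i in range(200)]
```

In practice the transition probabilities and the depth distribution would be estimated from the measured historical record rather than fixed by hand.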
Winter Simulation Conference | 2010
Lee W. Schruben; Dashi I. Singham
Notions from agent based modeling (ABM) can be used to simulate multivariate time series. An example is given using the ABM concept of flocking, which models the behaviors of birds (called boids) in a flock. A multivariate time series is mapped into the coordinates of a bounded orthotope. This represents the flight path of a boid. Other boids are generated that flock around this data boid. The coordinates of these new boids are mapped back to simulate replicates of the original time series. The flock size determines the number of replicates. The similarity of the replicates to the original time series can be controlled by flocking parameters to reflect the strength of the belief that the future will mimic the past. It is potentially possible to replicate general non-stationary, dependent, high-dimensional time series in this manner.
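The flocking mechanics can be sketched in one dimension: each generated boid follows the data boid's path with an attraction term plus noise, and the resulting paths serve as replicates. The `attraction` and `noise` parameters below stand in for the paper's flocking parameters and are purely illustrative.

```python
import random

def flock_replicates(series, n_boids=5, attraction=0.3, noise=0.5, seed=1):
    """Sketch of flocking-style resampling: each 'boid' tracks the data
    trajectory, steering toward the data boid and adding a random perturbation.
    Stronger attraction makes replicates mimic the original series more closely."""
    rng = random.Random(seed)
    replicates = []
    for _ in range(n_boids):
        pos, path = series[0], []
        for target in series:
            # steer toward the data boid, then perturb
            pos += attraction * (target - pos) + rng.gauss(0.0, noise)
            path.append(pos)
        replicates.append(path)
    return replicates

data = [0.0, 1.0, 2.0, 1.5, 1.0, 2.5, 3.0]
reps = flock_replicates(data, n_boids=10)  # flock size = number of replicates
```

A full implementation would also include the boid-to-boid alignment and separation rules and the mapping into a bounded orthotope; this sketch keeps only the attraction-plus-noise core.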
ACM Transactions on Modeling and Computer Simulation | 2014
Lee W. Schruben; Dashi I. Singham
This article introduces a new framework for resampling general time series data. The approach, inspired by computer agent flocking algorithms, can be used to generate inputs to complex simulation models or for generating pseudo-replications of expensive simulation outputs. The method has the flexibility to enable replicated sensitivity analysis for trace-driven simulation, which is critical for risk assessment. The article includes two simple implementations to illustrate the approach. These implementations are applied to nonstationary and state-dependent multivariate time series. Examples using emergency department data are presented.
INFORMS Journal on Computing | 2012
Dashi I. Singham; Lee W. Schruben
Absolute precision stopping rules are often used to determine the length of sequential experiments to estimate confidence intervals for simulated performance measures. Much is known about the asymptotic behavior of such procedures. In this paper, we introduce coverage contours to quantify the trade-offs in interval coverage, stopping times, and precision for finite-sample experiments using absolute precision rules. We use these contours to evaluate the coverage of a basic absolute precision stopping rule, and we show that this rule will lead to a bias in coverage even if all of the assumptions supporting the procedure are true. We define optimal stopping rules that deliver nominal coverage with the smallest expected number of observations. Contrary to previous asymptotic results that suggest decreasing the precision of the rule to approach nominal coverage in the limit, we find that it is optimal to increase the confidence coefficient used in the stopping rule, thus obtaining nominal coverage in a finite-sample experiment. If the simulation data are independent and identically normally distributed, we can calculate coverage contours analytically and find a stopping rule that is insensitive to the variance of the data while delivering at least nominal coverage for any precision value.
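A runnable sketch of a basic absolute-precision stopping rule, together with a small Monte Carlo estimate of its finite-sample coverage for i.i.d. normal data. The rule and constants here are a generic textbook version, not the paper's optimal rule; the coverage loop simply illustrates how the finite-sample coverage of such a rule can be checked empirically.

```python
import math
import random
import statistics

def absolute_precision_ci(sample_fn, delta, z=1.96, n0=10, n_max=100000):
    """Basic absolute-precision stopping rule: keep sampling until the
    CI half-width z * s / sqrt(n) falls below delta, then return the
    sample mean, the delivered half-width, and the stopping time."""
    data = [sample_fn() for _ in range(n0)]
    while True:
        n = len(data)
        half = z * statistics.stdev(data) / math.sqrt(n)
        if half <= delta or n >= n_max:
            return statistics.mean(data), half, n
        data.append(sample_fn())

# Empirical coverage for N(0,1) data: intervals delivered under a stopping
# rule can cover the true mean less often than the nominal 95%.
rng = random.Random(0)
hits = 0
for _ in range(500):
    m, h, n = absolute_precision_ci(lambda: rng.gauss(0, 1), delta=0.5)
    hits += (m - h <= 0.0 <= m + h)
coverage = hits / 500
```

Plotting such coverage estimates over a grid of precision and confidence values is one way to trace out the kind of coverage contours the paper analyzes.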
Winter Simulation Conference | 2009
Dashi I. Singham; Lee W. Schruben
Sequential stopping rules applied to confidence interval procedures (CIPs) may lead to coverage that is less than nominal. This paper introduces a method for estimating coverage functions analytically in order to evaluate the potential loss of coverage. This method also provides an estimate for the distribution of the stopping time of the procedure. Knowledge of coverage functions could help evaluate and compare confidence interval procedures while avoiding lengthy empirical testing. Numerical implementation of our method shows that analytical coverage functions approximate those calculated empirically. Analytical coverage functions can be used to explain why many sequential procedures do not provide adequate coverage.
Winter Simulation Conference | 2011
Dashi I. Singham; Meredith Therkildsen; Lee W. Schruben
Simulation flocking has been introduced as a method for generating simulation input from multivariate dependent time series for sensitivity and risk analysis. It can be applied to data for which a parametric model is not readily available or imposes too many restrictions on the possible inputs. This method uses techniques from agent-based modeling to generate a flock of boids that follow the data. In this paper, we apply simulation flocking to a border crossing scenario to determine if waypoints simulated from flocking can be used to provide improved information on the number of hostiles successfully crossing the border. Analysis of the output reveals scenario limitations and potential areas of improvement in the patrol strategy.
European Journal of Operational Research | 2018
Wenbo Cai; Dashi I. Singham
Mechanism design problems optimize contract offerings from a principal to different types of agents who have private information about their demands for a product or a service. We study the implications of uncertainty in agents’ demands on the principal’s contracts. Specifically, we consider the setting where agents’ demands follow heterogeneous distributions and the principal offers a menu of contracts stipulating quantities and transfer payments for each demand distribution. We present analytical solutions for the special case when there are two distributions each taking two discrete values, as well as a method for deriving analytical solutions from numerical solutions. We describe one application of the model in carbon capture and storage systems to demonstrate various types of optimal solutions and to obtain managerial insights.
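The paper's setting involves stochastic demands; as a toy stand-in, the sketch below brute-forces a classical deterministic two-type screening problem with the same individual-rationality (IR) and incentive-compatibility (IC) constraint structure. The valuation form theta * sqrt(q), the cost, and the search grid are all invented for illustration.

```python
import itertools
import math

def optimal_menu(thetas=(1.0, 2.0), probs=(0.5, 0.5), cost=0.5, grid=None):
    """Brute-force sketch of a two-type screening problem: the principal offers
    (quantity, transfer) pairs maximizing expected profit subject to IR and IC.
    Agent type theta values quantity q at theta * sqrt(q)."""
    if grid is None:
        grid = [i * 0.5 for i in range(13)]  # quantities/transfers in [0, 6]
    v = math.sqrt
    best, best_profit = None, -float("inf")
    for q1, q2, t1, t2 in itertools.product(grid, repeat=4):
        u1 = thetas[0] * v(q1) - t1          # type-1 utility from own contract
        u2 = thetas[1] * v(q2) - t2          # type-2 utility from own contract
        ir = u1 >= 0 and u2 >= 0             # participation constraints
        ic = (u1 >= thetas[0] * v(q2) - t2   # no type mimics the other
              and u2 >= thetas[1] * v(q1) - t1)
        if ir and ic:
            profit = probs[0] * (t1 - cost * q1) + probs[1] * (t2 - cost * q2)
            if profit > best_profit:
                best_profit, best = profit, (q1, t1, q2, t2)
    return best, best_profit

best_menu, expected_profit = optimal_menu()
```

In the paper's model the constraints are written in expectation over each type's demand distribution, but the qualitative structure (a menu screened by IR and IC constraints) is the same.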
Winter Simulation Conference | 2016
Dashi I. Singham; Roberto Szechtman
We introduce a new framework for performing multiple comparisons with a standard when simulation models are available to estimate the performance of many different systems. In this setting, a large proportion of the systems have mean performance drawn from some known null distribution, and the goal is to select the alternative systems whose means differ from that of the null distribution. We employ empirical Bayes ideas to achieve a bound on the false discovery rate (the proportion of selected systems that come from the null distribution) together with a desired probability that an alternative-type system is selected.
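The selection logic can be illustrated with the standard Benjamini-Hochberg step-up rule against a known null, a simpler stand-in for the paper's empirical-Bayes bound. The sample means, sample size, and standard deviation below are made up for the example.

```python
import math

def normal_sf(z):
    """Upper-tail probability of the standard normal distribution."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def select_systems(means, n, sigma, null_mean=0.0, fdr=0.10):
    """Illustrative Benjamini-Hochberg selection against a known standard:
    compute a one-sided p-value for each system's sample mean under the null,
    then keep the systems surviving the step-up rule at level fdr."""
    m = len(means)
    pvals = [normal_sf((x - null_mean) / (sigma / math.sqrt(n))) for x in means]
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= fdr * rank / m:   # step-up threshold
            k = rank
    return sorted(order[:k])

# Systems 1 and 3 have sample means far above the null mean of 0.
selected = select_systems([0.1, 2.5, 0.0, 3.0, -0.2], n=25, sigma=1.0)
```

The paper's empirical-Bayes approach additionally exploits that the proportion of null systems is large, which tightens the false discovery rate bound relative to this generic rule.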
ACM Transactions on Modeling and Computer Simulation | 2014
Dashi I. Singham
The sample size decision is crucial to the success of any sampling experiment. More samples imply better confidence and precision in the results, but require higher costs in terms of time, computing power, and money. Analysts often choose sequential stopping rules on an ad hoc basis to obtain confidence intervals with desired properties without requiring large sample sizes. However, the choice of stopping rule can affect the quality of the interval produced in terms of the coverage, precision, and replication cost. This article introduces methods for choosing and evaluating stopping rules for confidence interval procedures. We develop a general framework for assessing the quality of a broad class of stopping rules applied to independent and identically distributed data. We introduce coverage profiles that plot the coverage according to the stopping time and reveal situations when the coverage could be unexpectedly low. Finally, we recommend simple techniques for obtaining acceptable or optimal rules.
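A coverage profile of the kind described above can be estimated by simulation: run the stopping rule many times and group the empirical coverage by the observed stopping time N. The rule, the precision delta, and the replication count below are illustrative choices, not the article's recommended values.

```python
import math
import random
import statistics
from collections import defaultdict

def coverage_profile(sample_fn, true_mean, delta, z=1.96, n0=5, reps=2000, seed=0):
    """Empirical coverage of an absolute-precision stopping rule, grouped by
    the stopping time N; dips in the profile flag stopping times at which the
    delivered intervals cover the true mean less often than nominal."""
    rng = random.Random(seed)
    hits, counts = defaultdict(int), defaultdict(int)
    for _ in range(reps):
        data = [sample_fn(rng) for _ in range(n0)]
        while True:
            half = z * statistics.stdev(data) / math.sqrt(len(data))
            if half <= delta:
                break
            data.append(sample_fn(rng))
        n = len(data)
        counts[n] += 1
        hits[n] += abs(statistics.mean(data) - true_mean) <= half
    return {n: hits[n] / counts[n] for n in sorted(counts)}

profile = coverage_profile(lambda r: r.gauss(0, 1), true_mean=0.0, delta=0.8)
```

Runs that stop very early (small N) tend to do so because the sample standard deviation was unluckily small, which is exactly the mechanism that produces unexpectedly low coverage at short stopping times.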