Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Johannes Ledolter is active.

Publication


Featured research published by Johannes Ledolter.


Journal of the American Statistical Association | 1995

Monte Carlo EM Estimation for Time Series Models Involving Counts

Kung-Sik Chan; Johannes Ledolter

The observations in parameter-driven models for time series of counts are generated from latent unobservable processes that characterize the correlation structure. These models result in very complex likelihoods, and even the EM algorithm, which is usually well suited for problems of this type, involves high-dimensional integration. In this article we discuss a Monte Carlo EM (MCEM) algorithm that uses a Markov chain sampling technique in the calculation of the expectation in the E step of the EM algorithm. We propose a stopping criterion for the algorithm and provide rules for selecting the appropriate Monte Carlo sample size. We show that under suitable regularity conditions, an MCEM algorithm will, with high probability, get close to a maximizer of the likelihood of the observed data. We also discuss the asymptotic efficiency of the procedure. We illustrate our Monte Carlo estimation method on a time series involving small counts: the polio incidence time series previously analyzed by Zeger.
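The core of MCEM is replacing the analytic E step with a Monte Carlo average over draws of the latent data. A minimal sketch of that idea on a much simpler latent-data problem (right-censored normal observations with known standard deviation), not the authors' count time series model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Right-censored normal data: true mean 2.0, known sd 1.0, censoring point c = 2.5.
mu_true, sigma, c = 2.0, 1.0, 2.5
y_full = rng.normal(mu_true, sigma, size=500)
censored = y_full >= c
observed = np.where(censored, c, y_full)     # censored values recorded as c

def mcem_mean(obs, cens, c, sigma, n_mc=200, n_iter=40, seed=1):
    """Monte Carlo EM for the mean: the E step is replaced by an average over
    draws of the latent (censored) values from the truncated normal on [c, inf)."""
    rng = np.random.default_rng(seed)
    mu = obs.mean()                          # biased-low starting value
    k = cens.sum()
    for _ in range(n_iter):
        draws = np.empty((n_mc, k))
        for m in range(n_mc):
            z = rng.normal(mu, sigma, size=k)
            while (z < c).any():             # rejection sampling from the tail
                bad = z < c
                z[bad] = rng.normal(mu, sigma, size=bad.sum())
            draws[m] = z
        completed = obs.copy()
        completed[cens] = draws.mean(axis=0) # Monte Carlo E step
        mu = completed.mean()                # M step: complete-data MLE
    return mu

mu_hat = mcem_mean(observed, censored, c, sigma)
```

The naive mean of the recorded data is biased low by the censoring; the MCEM iterations remove that bias by repeatedly imputing the tail values under the current parameter estimate.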


The Journal of Business | 1987

Some Further Evidence on the Stochastic Properties of Systematic Risk

Daniel W. Collins; Johannes Ledolter; Judy Rayburn

Although there is consensus in the finance literature that the beta risk of equity securities is stochastic, there is considerable disagreement as to whether the variation is purely random or exhibits autocorrelation through time. To investigate this issue, the authors employ a model that allows beta to exhibit both random and autoregressive behavior simultaneously. They test this model against alternative specifications on a large sample of individual securities and randomly formed portfolios comprising 10, 50, and 100 securities. Results are also presented for portfolios formed according to firm size. Copyright 1987 by the University of Chicago.
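A state-space formulation is one standard way to let beta be simultaneously random and autoregressive: the Kalman filter sketch below tracks a time-varying beta in a market model. The noise variances and persistence here are made-up values for illustration, not the authors' specification or estimator:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate excess returns with a beta that follows an AR(1) around a long-run mean.
n, beta_bar, phi = 300, 1.0, 0.9
q, r = 0.05**2, 0.01**2          # state and observation noise variances (hypothetical)
m = rng.normal(0, 0.05, n)       # market excess returns
beta = np.empty(n)
beta[0] = beta_bar
for t in range(1, n):
    beta[t] = beta_bar + phi * (beta[t - 1] - beta_bar) + rng.normal(0, np.sqrt(q))
y = beta * m + rng.normal(0, np.sqrt(r), n)

def kalman_beta(y, m, beta_bar, phi, q, r):
    """Kalman filter for y_t = beta_t * m_t + v_t with AR(1) beta;
    returns the filtered beta path."""
    b = beta_bar
    p = q / (1 - phi**2)                    # start at the stationary state variance
    path = np.empty(len(y))
    for t in range(len(y)):
        # Predict the state forward one step.
        b = beta_bar + phi * (b - beta_bar)
        p = phi**2 * p + q
        # Update with the scalar observation y_t = m_t * beta_t + v_t.
        s = m[t] ** 2 * p + r
        k = p * m[t] / s
        b = b + k * (y[t] - m[t] * b)
        p = (1 - k * m[t]) * p
        path[t] = b
    return path

beta_hat = kalman_beta(y, m, beta_bar, phi, q, r)
```

Setting phi = 0 recovers the purely random-variation special case, so the two hypotheses the paper compares are nested in this one filter.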


Anesthesiology | 2005

Bayesian Prediction Bounds and Comparisons of Operating Room Times Even for Procedures with Few or No Historic Data

Franklin Dexter; Johannes Ledolter

Background: Lower prediction bounds (e.g., for fasting), upper prediction bounds (e.g., to schedule delays between sequential surgeons), comparisons of operating room (OR) times (e.g., when sequencing cases among ORs), and quantification of case uncertainty (e.g., for sequencing a surgeon’s list of cases) can be done accurately for combinations of surgeon and scheduled procedure(s) by using historic OR times. The authors propose that when there are few or no historic data, the predictive distribution of the OR time of a future case be centered at the scheduled OR time, and its proportional uncertainty be based on that of other surgeons and procedures. When there are a moderate or large number of historic data, the historic data alone are used in the prediction. When there are a small number of historic data, a weighted combination is used. Methods: This Bayesian method was tested with all 65,661 cases from a hospital. Results: Bayesian prediction bounds were accurate to within 2% (e.g., the 5% lower bounds exceeded 4.9% of the actual OR times). The predicted probability of one case taking longer than another was estimated to within 0.7%. When sequencing a surgeon’s list of cases to reduce patient waiting past scheduled start times, both the scheduled OR time and the variability in historic OR times should be used together when assessing which cases should be done first. Conclusions: The authors validated a practical way to calculate prediction bounds and compare the OR times of all cases, even those with few or no historic data for the surgeon and the scheduled procedure(s).
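The blending idea, centering on the scheduled time with no data and letting historic cases dominate as they accumulate, can be sketched with simple precision-style weights on the log scale. The prior_log_sd value and the n / (n + 4) weight are hypothetical stand-ins, not the fitted Bayesian model from the paper:

```python
import numpy as np
from scipy import stats

def or_time_bounds(scheduled_min, historic_min=None, prior_log_sd=0.30, alpha=0.05):
    """Prediction bounds (minutes) for a case's OR time, blending the scheduled
    time with historic OR times on the log scale. With no historic data the
    prediction centers on the scheduled time with an assumed proportional
    uncertainty; with many historic cases the data dominate."""
    log_center = np.log(scheduled_min)
    n = 0 if historic_min is None else len(historic_min)
    if n == 0:
        m, s = log_center, prior_log_sd
    else:
        logs = np.log(np.asarray(historic_min, dtype=float))
        data_sd = logs.std(ddof=1) if n > 1 else prior_log_sd
        w = n / (n + 4)                        # weight shifts toward data as n grows
        m = w * logs.mean() + (1 - w) * log_center
        s = np.sqrt(w * data_sd**2 + (1 - w) * prior_log_sd**2)
    z = stats.norm.ppf(1 - alpha)              # one-sided 95% bounds
    return np.exp(m - z * s), np.exp(m + z * s)

# No historic data: bounds bracket the scheduled 120 min.
lo0, hi0 = or_time_bounds(120)
# Eight historic cases pull the prediction toward their own center.
lo8, hi8 = or_time_bounds(120, [150, 160, 140, 155, 148, 152, 158, 145])
```

Working on the log scale makes the uncertainty proportional, which matches how OR times scale across short and long procedures.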


Anesthesia & Analgesia | 2005

Validation of statistical methods to compare cancellation rates on the day of surgery.

Franklin Dexter; Eric Marcon; Richard H. Epstein; Johannes Ledolter

We investigated the validity of several statistical methods to monitor the cancellation of electively scheduled cases on the day of surgery: χ2 test, Fisher’s exact test, Rao and Scott test, Student’s t-test, Clopper-Pearson confidence intervals, and Chen and Tipping modification of the Clopper-Pearson confidence intervals. Discrete-event computer simulation over many years was used to represent surgical suites with an unchanging cancellation rate. Because the true cancellation rate was fixed, the accuracy of the statistical methods could be determined. Cancellations caused by medical events, rare events, cases lasting longer than scheduled, and full postanesthesia or intensive care unit beds were modeled. We found that applying Student’s two-sample t-test to the transformation of the numbers of cases and canceled cases from each of six 4-wk periods was valid for most conditions. We recommend that clinicians and managers use this method in their quality monitoring reports. The other methods gave inaccurate results. For example, using χ2 or Fisher’s exact test, hospitals may erroneously determine that cancellation rates have increased when they really are unchanged. Conversely, if inappropriate statistical methods are used, administrators may claim success at reducing cancellation rates when, in fact, the problem remains unresolved, affecting patients and clinicians.
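The recommended comparison reduces to six numbers per group: one transformed proportion per 4-wk period, compared with a two-sample t-test. The abstract does not name the transformation, so the sketch below assumes the common arcsine-square-root variance-stabilizing transformation, and the counts are invented for illustration:

```python
import numpy as np
from scipy import stats

# Six 4-wk periods from each of two years: (canceled, scheduled) per period.
# Illustrative numbers only.
year1 = np.array([(12, 480), (15, 510), (11, 495), (14, 505), (13, 490), (16, 500)])
year2 = np.array([(13, 470), (12, 500), (15, 485), (14, 515), (11, 505), (13, 495)])

def transform(counts):
    """Arcsine-square-root of each period's cancellation proportion
    (an assumed variance-stabilizing choice, not confirmed by the abstract)."""
    canceled, scheduled = counts[:, 0], counts[:, 1]
    return np.arcsin(np.sqrt(canceled / scheduled))

# Student's two-sample t-test on the six transformed values per group.
t_stat, p_value = stats.ttest_ind(transform(year1), transform(year2))
```

Aggregating to periods before testing avoids treating individual cases as independent Bernoulli trials, which is what makes the χ2 and Fisher tests misleading here.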


Anesthesia & Analgesia | 2005

Tactical Decision Making for Selective Expansion of Operating Room Resources Incorporating Financial Criteria and Uncertainty in Subspecialties' Future Workloads

Franklin Dexter; Johannes Ledolter; Ruth E. Wachtel

We considered the allocation of operating room (OR) time at facilities where the strategic decision had been made to increase the number of ORs. Allocation occurs in two stages: a long-term tactical stage followed by a short-term operational stage. Tactical decisions, approximately 1 yr in advance, determine what specialized equipment and expertise will be needed. Tactical decisions are based on estimates of future OR workload for each subspecialty or surgeon. We show that groups of surgeons can be excluded from consideration at this tactical stage (e.g., surgeons who need intensive care beds or those with below average contribution margins per OR hour). Lower and upper limits are estimated for the future demand of OR time by the remaining surgeons. Thus, initial OR allocations can be accomplished with only partial information on future OR workload. Once the new ORs open, operational decision-making based on OR efficiency is used to fill the OR time and adjust staffing. Surgeons who were not allocated additional time at the tactical stage are provided increased OR time through operational adjustments based on their actual workload. In a case study from a tertiary hospital, future demand estimates were needed for only 15% of surgeons, illustrating the practicality of these methods for use in tactical OR allocation decisions.


International Journal of Forecasting | 1989

The effect of additive outliers on the forecasts from ARIMA models

Johannes Ledolter

Assume that a time series of length n = T+k includes an additive outlier at time T and suppose this fact is ignored in the estimation of the coefficients and the calculation of the forecasts. In this paper we derive the resulting increase in the mean square of the l-step-ahead forecast error. We show that this increase is due to (i) a carry-over effect of the outlier on the forecast, and (ii) a bias in the estimates of the autoregressive and moving average coefficients. Looking at several special cases we find that this increase is rather small provided that the outlier occurs not too close to the forecast origin. In such cases the point forecasts are largely unaffected. Our conclusion concerning the width of the prediction intervals is different, however. Since outliers in a time series inflate the estimate of the innovation variance, we find that the estimated prediction intervals are quite sensitive to additive outliers.
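The variance-inflation point is easy to see numerically: fit an AR(1) by least squares with and without a single additive outlier and compare the estimated innovation standard deviations. A hypothetical simulation, not the paper's analytical derivation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate an AR(1) series, then contaminate one observation with an additive outlier.
n, phi = 500, 0.6
e = rng.normal(0, 1.0, n)
y = np.empty(n)
y[0] = e[0]
for t in range(1, n):
    y[t] = phi * y[t - 1] + e[t]

y_out = y.copy()
y_out[250] += 8.0                      # additive outlier far from the forecast origin

def fit_ar1(x):
    """Least-squares AR(1) fit; returns (phi estimate, innovation sd estimate)."""
    x0, x1 = x[:-1], x[1:]
    phi_hat = (x0 @ x1) / (x0 @ x0)
    resid = x1 - phi_hat * x0
    return phi_hat, resid.std(ddof=1)

phi_clean, sd_clean = fit_ar1(y)
phi_cont, sd_cont = fit_ar1(y_out)
```

The contaminated fit produces two large residuals around the outlier, so the estimated innovation variance, and hence the width of every prediction interval built from it, is inflated even though the point forecasts far from the outlier barely move.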


Anesthesiology | 2005

Estimating the incidence of prolonged turnover times and delays by time of day.

Franklin Dexter; Richard H. Epstein; Eric Marcon; Johannes Ledolter

Background: Prolonged turnover times cause frustration and can thereby reduce professional satisfaction and the workload surgeons bring to a hospital. Methods: The authors analyzed 1 yr of operating room information system data from two academic, tertiary hospitals and Monte Carlo simulations of a 15–operating room hospital surgical suite. Results: Confidence interval widths for the mean turnover times at the hospitals were negligible when compared with the variation in sample mean turnover times among 31 hospitals. The authors developed a statistical method to estimate the proportion of all turnovers that were prolonged (> 15 min beyond mean) and that occurred during specified hours of the day. Confidence intervals for the proportions corrected for the effect of multiple comparisons. Statistical assumptions were satisfied at the two studied hospitals. The confidence intervals achieved family-wise type I error rates accurate to within 0.5% when applied to between five and nineteen 4-week periods of data. The diurnal pattern in the proportions of all turnovers that were prolonged provided different, more managerially relevant information than the time course throughout the day in the percentage of turnovers at each hour that were prolonged. Conclusions: Benchmarking sample mean turnover times among hospitals, without the use of confidence intervals, can be valid and useful. The authors successfully developed and validated a statistical method to estimate the percentage of turnover times at a surgical suite that are prolonged and occur at specified times of the day. Managers can target their quality improvement efforts on times of the day with the largest percentages of prolonged turnovers.


Journal of Quality Technology | 1991

A control chart based on ranks

Peter Hackl; Johannes Ledolter

A control chart technique that considers exponentially weighted moving averages (EWMA) of the ranks of the observations is proposed. This nonparametric technique is outlier-resistant and performs well if one is concerned about sudden shifts in the process.
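The idea can be sketched in a few lines: rank each new observation against a moving window of recent points, map the rank into (0, 1), and smooth with an EWMA. The window size, smoothing constant, and shift scenario below are illustrative choices, not the design constants derived in the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

def ewma_of_ranks(x, lam=0.2, window=20):
    """EWMA of sequential rank scores: each new observation is ranked against
    the preceding `window` points, so the statistic is distribution-free and a
    single outlier can move the score only to the ends of (0, 1)."""
    ewma = 0.5                               # in-control center of the rank score
    path = []
    for t in range(window, len(x)):
        ref = x[t - window:t]
        score = ((ref < x[t]).sum() + 0.5) / (window + 1)   # rank mapped into (0, 1)
        ewma = lam * score + (1 - lam) * ewma
        path.append(ewma)
    return np.array(path)

# 100 in-control points, then a sustained upward shift of two standard deviations.
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(2, 1, 100)])
path = ewma_of_ranks(x)
```

Because ranks are bounded, a gross outlier cannot drag the chart the way it would drag an EWMA of the raw observations, which is the outlier resistance the abstract refers to.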


Journal of Geophysical Research | 2010

Addendum to “Wind speed trends over the contiguous United States”

S. C. Pryor; Johannes Ledolter

An earlier paper (Pryor et al., 2009) reports linear trends for annual percentiles of 10 m wind speeds from across the United States based on ordinary linear regression applied without consideration of temporal autocorrelation. Herein we show significant temporal autocorrelation in annual metrics from approximately half of all surface and upper air wind speed time series and present analyses that indicate at least some fraction of the temporal autocorrelation at the annual time scale may be due to the influence of persistent low-frequency climate modes as manifest in teleconnection indices. Treatment of the temporal autocorrelation slightly reduces the number of stations for which linear trends in 10 m wind speeds are deemed significant but does not alter the trend magnitudes relative to those presented by Pryor et al. (2009). Analyses conducted accounting for the autocorrelation indicate 55% of annual 50th percentile 10 m wind speed time series, and 45% of 90th percentile annual 10 m wind speed time series derived from the National Climate Data Center DS3505 data set exhibit significant downward trends over the period 1973-2005. These trends are consistent with previously reported declines in pan evaporation but are not present in 10 m wind speeds from reanalysis products or upper air wind speeds from the radiosonde network.
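Why autocorrelation changes significance but not trend magnitude can be seen with a simple Cochrane-Orcutt step: quasi-difference with the estimated AR(1) coefficient and refit. The series below is simulated for illustration, not the DS3505 data or the authors' exact procedure:

```python
import numpy as np

rng = np.random.default_rng(5)

# Annual wind-speed-like series: linear downward trend plus AR(1) noise, 33 years.
n, slope = 33, -0.02
t = np.arange(n, dtype=float)
e = np.zeros(n)
for i in range(1, n):
    e[i] = 0.5 * e[i - 1] + rng.normal(0, 0.1)
y = 5.0 + slope * t + e

def ols_slope(t, y):
    """OLS fit of y on (1, t); returns slope, its nominal se, and residuals."""
    X = np.column_stack([np.ones_like(t), t])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (len(y) - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1], np.sqrt(cov[1, 1]), resid

# Naive OLS, then Cochrane-Orcutt: estimate rho from the residuals and
# refit on the quasi-differenced series (the slope parameter is unchanged).
b_ols, se_ols, resid = ols_slope(t, y)
rho = (resid[:-1] @ resid[1:]) / (resid[:-1] @ resid[:-1])
b_co, se_co, _ = ols_slope(t[1:] - rho * t[:-1], y[1:] - rho * y[:-1])
```

With positively autocorrelated errors the quasi-differenced fit gives a similar slope estimate but an honestly wider standard error, so fewer stations clear the significance threshold while the trend magnitudes stay put.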


Anesthesia & Analgesia | 2011

Analysis of variance of communication latencies in anesthesia: comparing means of multiple log-normal distributions.

Johannes Ledolter; Franklin Dexter; Richard H. Epstein

Anesthesiologists rely on communication over periods of minutes. The analysis of latencies between when messages are sent and responses obtained is an essential component of practical and regulatory assessment of clinical and managerial decision-support systems. Latency data including times for anesthesia providers to respond to messages have moderate sample sizes (n > 20), large coefficients of variation (e.g., 0.60 to 2.50), and heterogeneous coefficients of variation among groups. Highly inaccurate results are obtained both by performing analysis of variance (ANOVA) in the time scale or by performing it in the log scale and then taking the exponential of the result. To overcome these difficulties, one can perform calculation of P values and confidence intervals for mean latencies based on log-normal distributions using generalized pivotal methods. In addition, fixed-effects 2-way ANOVAs can be extended to the comparison of means of log-normal distributions. Pivotal inference does not assume that the coefficients of variation of the studied log-normal distributions are the same, and can be used to assess the proportional effects of 2 factors and their interaction. Latency data can also include a human behavioral component (e.g., complete other activity first), resulting in a bimodal distribution in the log-domain (i.e., a mixture of distributions). An ANOVA can be performed on a homogeneous segment of the data, followed by a single group analysis applied to all or portions of the data using a robust method, insensitive to the probability distribution.
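For a single group, a generalized pivotal confidence interval for a log-normal mean can be built from independent normal and chi-square draws applied to the log-scale sample statistics. This sketches only the one-sample building block, not the authors' full two-way ANOVA procedure, and the latency values are simulated:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated latencies (seconds): log-normal with log-scale mean mu and sd sigma;
# the distribution mean is exp(mu + sigma**2 / 2).
mu, sigma, n = 3.0, 0.8, 40
latencies = rng.lognormal(mu, sigma, n)

def pivotal_ci_lognormal_mean(x, n_draws=100_000, alpha=0.05, seed=11):
    """Generalized pivotal CI for the mean of a log-normal sample: substitute
    pivotal draws for mu and sigma^2 into exp(mu + sigma^2 / 2) and take
    quantiles. No equal-coefficient-of-variation assumption is needed."""
    rng = np.random.default_rng(seed)
    n = len(x)
    ybar = np.log(x).mean()
    s2 = np.log(x).var(ddof=1)
    z = rng.standard_normal(n_draws)
    u = rng.chisquare(n - 1, n_draws)
    sigma2_piv = (n - 1) * s2 / u                  # pivotal draws for sigma^2
    mu_piv = ybar - z * np.sqrt(sigma2_piv / n)    # pivotal draws for mu
    g = np.exp(mu_piv + sigma2_piv / 2)            # pivotal draws for the mean
    return np.quantile(g, [alpha / 2, 1 - alpha / 2])

lo, hi = pivotal_ci_lognormal_mean(latencies)
```

Because the pivotal draws carry the uncertainty in both mu and sigma^2, the interval is asymmetric on the original time scale, unlike exponentiating a symmetric log-scale interval.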

Collaboration


Dive into Johannes Ledolter's collaborations.

Top Co-Authors


Wolfgang Mayrhofer

Vienna University of Economics and Business


Guido Strunk

Vienna University of Economics and Business
