Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Alan Olinsky is active.

Publication


Featured research published by Alan Olinsky.


European Journal of Operational Research | 2003

The comparative efficacy of imputation methods for missing data in structural equation modeling

Alan Olinsky; Shaw Chen; Lisa L. Harlow

Missing data is a problem that permeates much of the research being done today. Traditional techniques for replacing missing values may have serious limitations, while recent developments in computing allow more sophisticated techniques to be used. This paper compares the efficacy of five current and promising methods for dealing with missing data, judged by the percent bias in parameter estimates. The focus is on structural equation modeling (SEM), a popular statistical technique that subsumes many traditional statistical procedures. For the comparison, the paper examines a full structural equation model generated by simulation in accord with previous research. The five techniques compared are expectation maximization (EM), full information maximum likelihood (FIML), mean substitution (Mean), multiple imputation (MI), and regression imputation (Regression). All of these techniques other than FIML impute the missing values and yield a complete dataset that researchers can reuse; FIML instead estimates the model parameters directly. The study involves two levels of sample size (100 and 500) and seven levels of incomplete data (2%, 4%, 8%, 12%, 16%, 24%, and 32% missing completely at random). After extensive bootstrapping and simulation, the results indicate that FIML is a superior method for estimating most types of parameters in an SEM setting. Furthermore, MI is found to be superior in the estimation of standard errors. MI is also an excellent estimator overall, with the exception of datasets with over 24% missing information.
Considering that FIML is a direct method and does not actually impute the missing data, whereas MI does, and can yield a complete set of data for the researcher to analyze, we conclude that MI, because of its theoretical and distributional underpinnings, is probably the most promising method for future applications in this field.
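The bias the abstract measures can be sketched with a toy MCAR experiment. Everything here is illustrative: the bivariate model, the 20% missingness rate, and the comparison of mean substitution against single regression imputation are simple stand-ins for the paper's full SEM simulation.

```python
import random
import statistics

random.seed(0)

# Toy MCAR experiment (illustrative only, not the paper's SEM design):
# generate a bivariate sample, delete 20% of y completely at random,
# then compare mean substitution with regression imputation.
n = 500
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.8 * xi + random.gauss(0, 0.6) for xi in x]

missing = set(random.sample(range(n), int(0.2 * n)))
observed = [i for i in range(n) if i not in missing]

def slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = sum((a - mx) ** 2 for a in xs)
    return num / den

# Fit the imputation model on the observed cases only.
b = slope([x[i] for i in observed], [y[i] for i in observed])
xbar = statistics.mean(x[i] for i in observed)
ybar = statistics.mean(y[i] for i in observed)

y_mean = [y[i] if i in observed else ybar for i in range(n)]
y_reg = [y[i] if i in observed else ybar + b * (x[i] - xbar) for i in range(n)]

print("true slope        ", round(slope(x, y), 3))
print("mean substitution ", round(slope(x, y_mean), 3))  # attenuated toward 0
print("regression imputed", round(slope(x, y_reg), 3))   # much closer to the truth
```

Mean substitution fills every gap with the same constant, so it dilutes the covariance between x and y and biases the slope toward zero; regression imputation preserves the linear relationship, which is one reason the simpler single-imputation methods fare poorly in the paper's comparison.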


The Journal of Education for Business | 2010

A Comparison of Logistic Regression, Neural Networks, and Classification Trees Predicting Success of Actuarial Students

Phyllis Schumacher; Alan Olinsky; John Quinn; Richard Manning Smith

The authors extended previous research by two of the authors, who had conducted a study designed to predict the successful completion of students enrolled in an actuarial program. That study used logistic regression to determine the probability of an actuarial student graduating in the major or dropping out. The authors compared its results with those obtained by re-examining the data using neural networks and classification trees in Enterprise Miner, the SAS data mining package, which can provide a prediction of the dependent variable for all cases in the data set, including those with missing values.
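As a rough sketch of the first of the three methods, here is a logistic regression fit by plain gradient ascent on invented data; the single predictor, its coefficients, and the sample are illustrative assumptions, and the study itself used SAS tools rather than hand-rolled code.

```python
import math
import random

random.seed(1)

# Illustrative data: one synthetic predictor (e.g. a standardized exam
# score) and a binary "completed the program" outcome drawn from a
# logistic model with intercept 0.5 and slope 2.0.
n = 200
score = [random.gauss(0, 1) for _ in range(n)]
success = [1 if random.random() < 1 / (1 + math.exp(-(0.5 + 2.0 * s))) else 0
           for s in score]

# Fit by gradient ascent on the log-likelihood.
w0, w1, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    g0 = g1 = 0.0
    for s, yv in zip(score, success):
        p = 1 / (1 + math.exp(-(w0 + w1 * s)))  # predicted probability
        g0 += yv - p
        g1 += (yv - p) * s
    w0 += lr * g0 / n
    w1 += lr * g1 / n

print("fitted intercept", round(w0, 2), "slope", round(w1, 2))
```

The fitted slope recovers the positive relationship between the predictor and the probability of completing the program, which is the quantity the logistic model in the study reports.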


International Journal of Mathematical Education in Science and Technology | 2004

A Genetic Algorithm Approach to Nonlinear Least Squares Estimation.

Alan Olinsky; John Quinn; Paul Mangiameli; Shaw K. Chen

A common type of problem encountered in mathematics is optimizing nonlinear functions. Many of the popular algorithms currently available for finding nonlinear least squares estimators, a special class of nonlinear problems, are sometimes inadequate: they might not converge to an optimal value, or, if they do, it may be to a local rather than a global optimum. Genetic algorithms have been applied successfully to function optimization and should therefore be effective for nonlinear least squares estimation. This paper provides an illustration of a genetic algorithm applied to a simple nonlinear least squares example.
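A minimal sketch of the idea, assuming an illustrative model y = a·exp(b·x) (not necessarily the paper's example): a small genetic algorithm with truncation selection, blend crossover, and Gaussian mutation, minimizing the sum of squared errors.

```python
import math
import random

random.seed(2)

# Synthetic data from y = 2.0 * exp(0.9 * x) plus a little noise.
xs = [0.1 * i for i in range(20)]
ys = [2.0 * math.exp(0.9 * x) + random.gauss(0, 0.05) for x in xs]

def sse(ind):
    """Sum of squared errors: the fitness to minimize."""
    a, b = ind
    return sum((yv - a * math.exp(b * x)) ** 2 for x, yv in zip(xs, ys))

# Individuals are (a, b) pairs drawn from broad initial ranges.
pop = [(random.uniform(0, 5), random.uniform(0, 2)) for _ in range(60)]
for _ in range(100):
    pop.sort(key=sse)
    parents = pop[:20]                      # truncation selection (elitist)
    children = []
    while len(children) < 40:
        p1, p2 = random.sample(parents, 2)
        t = random.random()                 # blend crossover
        a = t * p1[0] + (1 - t) * p2[0] + random.gauss(0, 0.05)  # + mutation
        b = t * p1[1] + (1 - t) * p2[1] + random.gauss(0, 0.05)
        children.append((a, b))
    pop = parents + children                # elites survive unchanged

best = min(pop, key=sse)
print("estimated a, b:", round(best[0], 2), round(best[1], 2))
```

Because the elites are carried over unmutated, the best fitness never worsens between generations, and the population converges toward the global least-squares estimate rather than getting trapped the way a poorly started gradient method can.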


International Journal of Business Intelligence Research | 2010

Data Mining for Health Care Professionals: MBA Course Projects Resulting in Hospital Improvements

Alan Olinsky; Phyllis Schumacher

In this paper, the authors discuss a data mining course that was offered to a cohort of health care professionals employed by a hospital consortium, as an elective in a synchronous online MBA program. The students learned to use data mining to analyze data on two platforms: SAS Enterprise Miner (2008) and XLMiner (an Excel add-in). The final assignment for the semester was for the students to analyze a data set from their place of employment. This paper describes the projects and the resulting benefits to the companies for which the students worked.


Archive | 2013

Forecasting Patient Volume for a Large Hospital System: A Comparison of the Periodicity of Time Series Data and Forecasting Approaches

Kristin Kennedy; Michael Salzillo; Alan Olinsky; John Quinn

Managing a large hospital network can be an extremely challenging task. Management must rely on numerous pieces of information when making business decisions. This chapter focuses on the number of bed days (NBD), which can be extremely valuable for operational managers to forecast for logistical planning purposes. In addition, the finance staff often requires an expected NBD as input for estimating future expenses. Some hospital reimbursement contracts are on a per diem schedule, and expected NBD is useful in forecasting future revenue. Two models, time regression and autoregressive integrated moving average (ARIMA), are applied to nine years of monthly counts of NBD for the Rhode Island Hospital System. These two models are compared to see which gives the best fit for the forecasted NBD. Also, the question of summarizing the time data from monthly to quarterly periods is addressed. The approaches presented in this chapter can be applied to a variety of time series data for business forecasting.
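The flavor of the comparison can be sketched on synthetic monthly counts. The trend, seasonality, and noise levels below are invented, not the hospital data, and a seasonal-naive forecast (repeat the same month from last year) stands in for the ARIMA model.

```python
import math
import random
import statistics

random.seed(3)

# Nine years of synthetic monthly bed-day counts:
# upward trend + annual seasonality + noise (illustrative only).
months = list(range(108))
nbd = [10000 + 15 * t + 800 * math.sin(2 * math.pi * t / 12)
       + random.gauss(0, 200) for t in months]

train, test = nbd[:96], nbd[96:]            # hold out the final year

# Approach 1: time regression (least-squares linear trend).
ts = list(range(96))
tbar, ybar = statistics.mean(ts), statistics.mean(train)
b = sum((t - tbar) * (yv - ybar) for t, yv in zip(ts, train)) / \
    sum((t - tbar) ** 2 for t in ts)
a = ybar - b * tbar
trend_fc = [a + b * t for t in range(96, 108)]

# Approach 2: seasonal-naive forecast, standing in for ARIMA here.
seasonal_fc = nbd[96 - 12:108 - 12]

def rmse(fc):
    return math.sqrt(statistics.mean((f - yv) ** 2 for f, yv in zip(fc, test)))

print("trend regression RMSE:", round(rmse(trend_fc), 1))
print("seasonal method  RMSE:", round(rmse(seasonal_fc), 1))
```

On data like this, the pure trend line misses the seasonal swings entirely, while even the crude seasonal method captures them; this is the kind of fit difference the chapter's model comparison quantifies properly.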


Systems, Man and Cybernetics | 2004

Program scheduling of interrelated programs in a university faculty when there might be strategic behavior

Christian M. Dufournaud; Alan Olinsky; Joseph J. Harrington; John Quinn; Geoff McBoyle

This work deals with finding course schedules for four different departments simultaneously that meet the requirements of the faculty in two ways: 1) they must offer the required courses to meet the respective program mandates, and must offer them in a way that keeps the programs accessible to students at various stages of meeting their degree requirements; and 2) they must satisfy the faculty members about teaching the courses at the assigned times. For the latter, it is possible to develop a set of prices that prevents faculty members from acting strategically to improve their time slot allocations. This is tested in the context of scheduling four interrelated programs in the Faculty of Environmental Studies at the University of Waterloo.


Industrial Management and Data Systems | 1998

Dynamic financial imaging: using multimedia to measure the health of your business

Harold A. Records; Alan Olinsky

Businesses of the late 1990s have available a wealth of data and information that managers use to measure the health of their business and to identify problems and opportunities. Unfortunately, current measures of business activity are static and do not capture the dynamic flows of business transactions as they occur. Warning signs of pending changes are frequently not seen until after the fact when the financial impact of these changes is reported. It is our proposal that business transactions and performance can and should be measured in a dynamic rather than static manner. Recent advances in computer and communications technology combined with powerful multimedia software enable the construction of algorithms and on‐screen instruments which can be used to put business transactions and performance into a dynamic visible form that is readily understood by users.


Archive | 2017

An Oversampling Technique for Classifying Imbalanced Datasets

Son Nguyen; John Quinn; Alan Olinsky

We propose an oversampling technique to increase the true positive rate (sensitivity) when classifying imbalanced datasets (i.e., those in which one value of the target variable occurs with small frequency) and hence boost overall performance measures such as balanced accuracy, G-mean, and the area under the receiver operating characteristic (ROC) curve (AUC). The method is based on the idea of applying the Synthetic Minority Oversampling Technique (SMOTE) to only a selective portion of the dataset instead of the entire dataset. We demonstrate the effectiveness of our oversampling method on four real and simulated datasets generated from three models.
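The core SMOTE interpolation step that the chapter builds on can be sketched as follows. For simplicity this applies it to the whole minority class rather than to the selected portion the chapter proposes, and the two-dimensional data are invented.

```python
import random

random.seed(4)

# A small illustrative minority class of 2-D points.
minority = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(10)]

def smote(points, n_new, k=3):
    """Generate n_new synthetic points by interpolating each chosen
    point toward one of its k nearest minority-class neighbours."""
    synthetic = []
    for _ in range(n_new):
        p = random.choice(points)
        nbrs = sorted((q for q in points if q is not p),
                      key=lambda q: (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2)[:k]
        q = random.choice(nbrs)
        t = random.random()                 # random point on the segment p -> q
        synthetic.append((p[0] + t * (q[0] - p[0]),
                          p[1] + t * (q[1] - p[1])))
    return synthetic

new_points = smote(minority, 20)
print(len(minority) + len(new_points), "minority samples after oversampling")
```

Because every synthetic sample lies on a segment between two existing minority points, the oversampled class stays inside the region the minority already occupies; the chapter's contribution is in choosing *which* portion of the data to feed this step.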


Archive | 2016

How Information Spreads in Online Social Networks

Christopher J. Quinn; Matthew James Quinn; Alan Olinsky; John Quinn

Online social networks are increasingly important venues for businesses to promote their products and image. However, information propagation in online social networks is significantly more complicated compared to traditional transmission media such as newspaper, radio, and television. In this chapter, we will discuss research on modeling and forecasting diffusion of virally marketed content in social networks. Important aspects include the content and its presentation, the network topology, and transmission dynamics. Theoretical models, algorithms, and case studies of viral marketing will be explored.
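One standard diffusion model of the kind such research studies, the independent cascade, can be simulated in a few lines; the toy network and the transmission probability below are illustrative assumptions, not taken from the chapter.

```python
import random

random.seed(5)

# A tiny directed network: node -> list of followers it can influence.
edges = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
p = 0.4  # per-edge transmission probability (assumed)

def cascade(seed_nodes):
    """Independent cascade: each newly activated node gets one chance
    to activate each of its inactive neighbours."""
    active, frontier = set(seed_nodes), list(seed_nodes)
    while frontier:
        nxt = []
        for u in frontier:
            for v in edges[u]:
                if v not in active and random.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

# Estimate the expected cascade size from seeding node 0.
sizes = [len(cascade([0])) for _ in range(2000)]
print("mean cascade size:", round(sum(sizes) / len(sizes), 2))
```

Forecasting viral reach in this framework amounts to estimating exactly this expected cascade size, which depends on the seed set, the topology, and the transmission dynamics the chapter discusses.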


Archive | 2016

Honing a Predictive Model to Accurately Forecast the Number of Bed Days Needed to Cover Patient Volume for a Large Hospital System

Alan Olinsky; Kristin Kennedy; Michael Salzillo

Forecasting the number of bed days (NBD) needed within a large hospital network is extremely challenging, but it is imperative that management find a predictive model that best estimates the calculation. This estimate is used by operational managers for logistical planning purposes. Furthermore, the finance staff of a hospital would require an expected NBD as input for estimating future expenses. Some hospital reimbursement contracts are on a per diem schedule, and expected NBD is useful in forecasting future revenue. This chapter examines two ways of estimating the NBD for a large hospital system, and it builds from previous work comparing time regression and an autoregressive integrated moving average (ARIMA). The two approaches discussed in this chapter examine whether using the total or combined NBD for all the data is a better predictor than partitioning the data by different types of services. The four partitions are medical, maternity, surgery, and psychology. The partitioned time series would then be used to forecast future NBD by each type of service, but one could also sum the partitioned predictors for an alternative total forecaster. The question is whether one of these two approaches outperforms the other with a best fit for forecasting the NBD. The approaches presented in this chapter can be applied to a variety of time series data for business forecasting when a large database of information can be partitioned into smaller segments.

Collaboration


Dive into Alan Olinsky's collaborations.

Top Co-Authors

Paul Mangiameli

University of Rhode Island


Shaw K. Chen

University of Rhode Island
