Publication


Featured research published by Andrew L. Johnson.


Operations Research | 2010

Data Envelopment Analysis as Nonparametric Least-Squares Regression

Timo Kuosmanen; Andrew L. Johnson

Data envelopment analysis (DEA) is known as a nonparametric mathematical programming approach to productive efficiency analysis. In this paper, we show that DEA can be alternatively interpreted as nonparametric least-squares regression subject to shape constraints on the frontier and sign constraints on residuals. This reinterpretation reveals the classic parametric programming model by Aigner and Chu [Aigner, D., S. Chu. 1968. On estimating the industry production function. Amer. Econom. Rev. 58 826–839] as a constrained special case of DEA. Applying these insights, we develop a nonparametric variant of the corrected ordinary least-squares (COLS) method. We show that this new method, referred to as corrected concave nonparametric least squares (C2NLS), is consistent and asymptotically unbiased. The linkages established in this paper contribute to further integration of the econometric and axiomatic approaches to efficiency analysis.
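
As a concrete illustration of the reinterpretation, the following minimal sketch fits a one-input, one-output sign-constrained CNLS problem with cvxpy. This is not the authors' code; the data, solver choice, and variable names are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): the sign-constrained concave
# nonparametric least-squares problem that the paper links to DEA,
# for one input and one output. Data and names are illustrative.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(1, 10, 30))           # single input
y = np.sqrt(x) * rng.uniform(0.7, 1.0, 30)    # output below a concave frontier

n = len(x)
alpha = cp.Variable(n)                        # observation-specific intercepts
beta = cp.Variable(n, nonneg=True)            # slopes (monotonicity: beta >= 0)
resid = y - (alpha + cp.multiply(beta, x))

constraints = [resid <= 0]                    # sign constraint: frontier envelops data
# Afriat inequalities enforce concavity of the piecewise-linear frontier.
for i in range(n):
    for j in range(n):
        if i != j:
            constraints.append(alpha[i] + beta[i] * x[i]
                               <= alpha[j] + beta[j] * x[i])

cp.Problem(cp.Minimize(cp.sum_squares(resid)), constraints).solve()

frontier = alpha.value + beta.value * x
print((y / frontier).round(3))                # Farrell-style output efficiency <= 1
```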


European Journal of Operational Research | 2011

Guidelines for using variable selection techniques in data envelopment analysis

Niranjan R. Nataraja; Andrew L. Johnson

Model misspecification has significant impacts on data envelopment analysis (DEA) efficiency estimates. This paper discusses the four most widely used approaches to guide variable specification in DEA. We analyze the efficiency contribution measure (ECM), principal component analysis (PCA-DEA), a regression-based test, and bootstrapping for variable selection via Monte Carlo simulations to determine each approach's advantages and disadvantages. For a three-input, one-output production process, we find that: PCA-DEA performs well with highly correlated inputs (greater than 0.8) and even for small data sets (less than 300 observations); both the regression and ECM approaches perform well under low correlation (less than 0.2) and relatively larger data sets (at least 300 observations); and bootstrapping performs relatively poorly. Bootstrapping requires hours of computational time, whereas the other three methods require minutes. Based on the results, we offer guidelines for effectively choosing among the four selection methods.
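
The PCA-DEA preprocessing step can be sketched as below. This is an illustrative reduction of correlated inputs to principal components, not the paper's simulation code; translating components back into nonnegative DEA inputs involves further steps discussed in the literature.

```python
# Illustrative sketch of the PCA-DEA preprocessing step: replace highly
# correlated inputs with their leading principal components before DEA.
# (Producing nonnegative DEA inputs from components needs extra care.)
import numpy as np

rng = np.random.default_rng(1)
base = rng.uniform(1, 10, (200, 1))
X = base + rng.normal(0, 0.3, (200, 3))       # three inputs, correlation > 0.8

Xc = X - X.mean(axis=0)                       # center the inputs
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)
k = int(np.searchsorted(np.cumsum(explained), 0.95)) + 1  # keep 95% of variance
Z = Xc @ Vt[:k].T                             # reduced inputs to feed into DEA

print(f"kept {k} of {X.shape[1]} components; variance shares {explained.round(3)}")
```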


European Journal of Operational Research | 2012

One-stage and two-stage DEA estimation of the effects of contextual variables

Andrew L. Johnson; Timo Kuosmanen

Two-stage data envelopment analysis (2-DEA) is commonly used in productive efficiency analysis to estimate the effects of operational conditions and practices on performance. In this method the DEA efficiency estimates are regressed on contextual variables representing the operational conditions. We re-examine the statistical properties of the 2-DEA estimator, and find that it is statistically consistent under more general conditions than earlier studies assume. We further show that the finite-sample bias of DEA in the first stage carries over to the second-stage regression, causing bias in the estimated coefficients of the contextual variables. This bias is particularly severe when the contextual variables are correlated with inputs. To address this shortcoming, we apply the result that DEA can be formulated as a constrained special case of convex nonparametric least squares (CNLS) regression. Applying the CNLS formulation, we develop a new semi-nonparametric one-stage estimator for the coefficients of the contextual variables that directly incorporates the contextual variables into the standard DEA problem. The proposed method is hence referred to as one-stage DEA (1-DEA). Evidence from Monte Carlo simulations suggests that the new 1-DEA estimator performs systematically better than the conventional 2-DEA estimator in both deterministic and noisy scenarios.
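
The conventional 2-DEA procedure that the paper re-examines can be sketched as follows; the data, functional form, and input-oriented CRS DEA formulation are illustrative assumptions, not the paper's experimental design.

```python
# Sketch of the conventional two-stage (2-DEA) procedure the paper
# re-examines, on simulated data: stage 1 estimates input-oriented CRS
# DEA efficiency; stage 2 regresses it on a contextual variable z.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(2)
n = 50
z = rng.normal(0, 1, n)                                   # contextual variable
x = rng.uniform(1, 10, n)                                 # single input
y = x**0.6 * np.exp(0.3 * z) * rng.uniform(0.8, 1.0, n)   # z shifts output

theta_hat = np.empty(n)
for o in range(n):                                        # one LP per DMU
    lam = cp.Variable(n, nonneg=True)
    theta = cp.Variable()
    cp.Problem(cp.Minimize(theta),
               [y @ lam >= y[o], x @ lam <= theta * x[o]]).solve()
    theta_hat[o] = theta.value

# Stage 2: OLS of log efficiency on z; the stage-1 DEA bias carries over
# into this coefficient, which is the shortcoming the paper analyzes.
A = np.column_stack([np.ones(n), z])
coef, *_ = np.linalg.lstsq(A, np.log(theta_hat), rcond=None)
print(f"estimated effect of z: {coef[1]:.3f}")
```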


European Journal of Operational Research | 2012

Batch picking in narrow-aisle order picking systems with consideration for picker blocking

Soondo Hong; Andrew L. Johnson; Brett A. Peters

This paper develops strategies to control picker blocking that challenge the traditional assumptions regarding the tradeoffs between wide- and narrow-aisle order picking systems. We propose an integrated batching and sequencing procedure called the indexed batching model (IBM), with the objective of minimizing the total retrieval time (the sum of travel time, pick time and congestion delays). The IBM differs from traditional batching formulations by assigning orders to indexed batches, whereby each batch corresponds to a position in the batch release sequence. We develop a mixed integer programming solution for exact control, and demonstrate a simulated annealing procedure to solve large practical problems. Our results indicate that the proposed approach achieves a 5–15% reduction in the total retrieval time primarily by reducing picker blocking. We conclude that the IBM is particularly effective in narrow-aisle picking systems.
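
A hedged sketch of the simulated annealing component is given below; the retrieval-time proxy and the neighborhood move are stand-ins I introduce for illustration, not the paper's travel and picker-blocking model.

```python
# Hedged sketch of simulated annealing over indexed batches. The
# retrieval-time proxy (aisle span + overlap congestion between adjacent
# release positions) is a stand-in, not the paper's cost model.
import math
import random

random.seed(3)
n_orders, n_batches, cap = 24, 6, 4
aisle = [random.randint(0, 9) for _ in range(n_orders)]   # pick aisle per order

def cost(assign):
    batches = [[aisle[o] for o in range(n_orders) if assign[o] == b]
               for b in range(n_batches)]
    travel = sum(max(b) - min(b) + 1 for b in batches if b)
    congest = sum(len(set(batches[k]) & set(batches[k + 1]))  # aisles shared by
                  for k in range(n_batches - 1))              # consecutive batches
    return travel + congest

def feasible(assign):
    return all(assign.count(b) <= cap for b in range(n_batches))

assign = [o % n_batches for o in range(n_orders)]         # initial indexed batches
best_cost, T = cost(assign), 10.0
while T > 0.01:
    cand = assign[:]
    cand[random.randrange(n_orders)] = random.randrange(n_batches)  # move an order
    if feasible(cand):
        delta = cost(cand) - cost(assign)
        if delta < 0 or random.random() < math.exp(-delta / T):
            assign = cand
            best_cost = min(best_cost, cost(assign))
    T *= 0.99                                             # geometric cooling
print("best total retrieval-time proxy:", best_cost)
```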


Computers & Operations Research | 2010

Three-stage DEA models for incorporating exogenous inputs

Sarah M. Estelle; Andrew L. Johnson; John Ruggiero

In this paper, we discuss three-stage models that control for exogenous, non-discretionary inputs in data envelopment analysis. In a recent article in this journal, Monte Carlo analysis was employed to compare and contrast alternative DEA models that measure efficiency in the presence of exogenous variables. The methodology for comparison was flawed, calling into question the results presented. We introduce new second-stage models and compare and contrast them with simulated data.


European Journal of Operational Research | 2008

Outlier detection in two-stage semiparametric DEA models

Andrew L. Johnson; Leon F. McGinnis

In the use of peer group data to assess individual, typical or best practice performance, the effective detection of outliers is critical for achieving useful results, particularly for two-stage analyses. In the DEA-related literature, prior work on this issue has focused on the efficient frontier as a basis for detecting outliers. An iterative approach for dealing with the potential for one outlier to mask the presence of another has been proposed but not demonstrated. This paper proposes using both the efficient frontier and the inefficient frontier to identify outliers and thereby improve the accuracy of second stage results in two-stage nonparametric analysis. The iterative outlier detection approach is implemented in a leave-one-out method using both the efficient frontier and the inefficient frontier and demonstrated in a two-stage semi-parametric bootstrapping analysis of a classic data set. The results show that the conclusions drawn can be different when outlier identification includes consideration of the inefficient frontier.
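
In the special case of one input and one output under constant returns, efficiency reduces to productivity ratios, which allows a compact illustration of the leave-one-out screen against both frontiers. The thresholds and planted outliers below are arbitrary; the iterative version would rerun the screen after removing flagged units.

```python
# Illustrative leave-one-out screen against both the efficient and the
# inefficient frontier in the single-input, single-output CRS case,
# where efficiency reduces to productivity ratios.
import numpy as np

rng = np.random.default_rng(4)
ratio = rng.uniform(0.5, 1.0, 40)             # productivity y/x of each unit
ratio[7], ratio[19] = 2.5, 0.05               # planted high and low outliers

flagged = []
for i in range(len(ratio)):
    others = np.delete(ratio, i)
    eff_loo = ratio[i] / others.max()         # vs. leave-one-out efficient frontier
    ineff_loo = others.min() / ratio[i]       # vs. leave-one-out inefficient frontier
    if eff_loo > 1.5 or ineff_loo > 1.5:      # lies far outside either frontier
        flagged.append(i)
print("flagged units:", flagged)
```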


European Journal of Operational Research | 2013

A More Efficient Algorithm for Convex Nonparametric Least Squares

Chia Yen Lee; Andrew L. Johnson; Erick Moreno-Centeno; Timo Kuosmanen

Convex nonparametric least squares (CNLS) is a nonparametric regression method that does not require a priori specification of the functional form. The CNLS problem is solved by mathematical programming techniques; however, since the CNLS problem size grows quadratically as a function of the number of observations, standard quadratic programming (QP) and nonlinear programming (NLP) algorithms are inadequate for handling large samples, and the computational burdens become significant even for relatively small samples. This study proposes a generic algorithm that improves the computational performance in small samples and is able to solve problems that are currently unattainable. A Monte Carlo simulation is performed to evaluate the performance of six variants of the proposed algorithm. These experimental results indicate that the most effective variant can be identified given the sample size and the dimensionality. The computational benefits of the new algorithm are demonstrated by an empirical application that proved insurmountable for the standard QP and NLP algorithms.
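
One way to read such an algorithm is as constraint generation over the Afriat concavity inequalities: solve a relaxed problem and add only the constraints the current solution violates. The sketch below is my illustration of that idea under those assumptions, not the authors' implementation.

```python
# Assumption-laden sketch of constraint generation for CNLS: solve a
# relaxed problem and iteratively add only the Afriat concavity
# constraints that the current solution violates.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(5)
n = 60
x = np.sort(rng.uniform(1, 10, n))
y = np.log(x) + rng.normal(0, 0.1, n)

alpha = cp.Variable(n)
beta = cp.Variable(n, nonneg=True)
active = set()                                # currently enforced (i, j) pairs
for _ in range(20):                           # outer constraint-generation loop
    cons = [alpha[i] + beta[i] * x[i] <= alpha[j] + beta[j] * x[i]
            for (i, j) in active]
    resid = y - (alpha + cp.multiply(beta, x))
    cp.Problem(cp.Minimize(cp.sum_squares(resid)), cons).solve()
    a, b = alpha.value, beta.value
    viol = []
    for i in range(n):
        j = int(np.argmin(a + b * x[i]))      # lowest hyperplane evaluated at x[i]
        if a[i] + b[i] * x[i] > a[j] + b[j] * x[i] + 1e-6 and (i, j) not in active:
            viol.append((i, j))
    if not viol:
        break                                 # all Afriat inequalities hold
    active.update(viol)
print(f"solved with {len(active)} of {n * (n - 1)} Afriat constraints enforced")
```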


European Journal of Operational Research | 2012

Two-dimensional efficiency decomposition to measure the demand effect in productivity analysis

Chia Yen Lee; Andrew L. Johnson

This paper proposes a two-dimensional efficiency decomposition (2DED) of profitability for a production system to account for the demand effect observed in productivity analysis. The first dimension identifies four components of efficiency: capacity design, demand generation, operations, and demand consumption, using network data envelopment analysis (Network DEA). The second dimension decomposes the efficiency measures and integrates them into a profitability efficiency framework. Thus, each component's profitability change can be analyzed based on technical efficiency change, scale efficiency change, and allocative efficiency change. An empirical study based on data from 2006 to 2008 for the US airline industry finds that the productivity regress is mainly caused by demand fluctuation in 2007–2008 rather than by technical regress in production capabilities.
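
Schematically, the second dimension can be written as follows; the notation is mine, a sketch of the structure the abstract describes rather than the paper's exact formulation.

```latex
% Schematic only; notation is illustrative, not reproduced from the paper.
% For each component c in {capacity design, demand generation, operations,
% demand consumption}, the change in profitability efficiency (PE) factors
% into technical (TE), scale (SE), and allocative (AE) efficiency changes.
\[
  \Delta \mathrm{PE}_c \;=\;
  \Delta \mathrm{TE}_c \times \Delta \mathrm{SE}_c \times \Delta \mathrm{AE}_c
\]
```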


IIE Transactions | 2012

Large-scale order batching in parallel-aisle picking systems

Soondo Hong; Andrew L. Johnson; Brett A. Peters

This article discusses an order batching formulation and heuristic solution procedure suitable for large-scale order picking situations in parallel-aisle picking systems. Order batching can decrease the total travel distance of pickers not only by reducing the number of trips but also by shortening the length of each trip. In practice, some order picking systems retrieve 500–2000 orders per hour and include ten or more aisles. The proposed heuristic produces near-optimal solutions with run times of roughly 70 s in a ten-aisle system. The quality of the solutions is demonstrated by comparison with a lower bound obtained from a linear programming relaxation of the batching formulation. A simulation study indicates that the proposed heuristic outperforms existing methods described in the literature or used in practice. In addition, the resulting order picking operations are relatively robust to picker blocking.
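
The bounding logic can be illustrated on a toy set-partitioning version of batching, where an LP relaxation provides a lower bound against which a heuristic's cost is compared. The formulation, costs, and batch size below are simplified stand-ins, not the article's model.

```python
# Toy illustration of bounding a batching heuristic with an LP
# relaxation: batches of two orders, trip cost = aisle span.
import itertools
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(6)
n = 10                                        # even number of orders
aisle = rng.integers(0, 10, n)

pairs = list(itertools.combinations(range(n), 2))
cost = np.array([abs(int(aisle[i]) - int(aisle[j])) + 1 for i, j in pairs])

# Set-partitioning LP relaxation: cover each order exactly once.
A_eq = np.zeros((n, len(pairs)))
for k, (i, j) in enumerate(pairs):
    A_eq[i, k] = A_eq[j, k] = 1
lp = linprog(cost, A_eq=A_eq, b_eq=np.ones(n), bounds=(0, 1))

# Greedy heuristic: repeatedly batch the closest remaining pair of orders.
remaining, greedy_cost = set(range(n)), 0
while remaining:
    i, j = min(itertools.combinations(remaining, 2),
               key=lambda p: abs(int(aisle[p[0]]) - int(aisle[p[1]])))
    greedy_cost += abs(int(aisle[i]) - int(aisle[j])) + 1
    remaining -= {i, j}

print(f"LP lower bound {lp.fun:.1f} <= heuristic cost {greedy_cost}")
```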


Annals of Operations Research | 2014

Nonparametric measurement of productivity and efficiency in education

Andrew L. Johnson; John Ruggiero

Nondiscretionary environmental inputs are critical in explaining relative efficiency differences and productivity changes in public sector applications. For example, the literature on education production shows that school districts perform better when student poverty is lower. In this paper, we extend the nonparametric approach suggested by Färe et al. (American Economic Review 84:66–83, 1994) to decompose the Malmquist Productivity Index into efficiency, technological, and environmental changes. Applying the extended approach in an analysis of the educational production of 604 school districts in Ohio, we find that changes in environmental harshness are the primary driver of productivity changes in underperforming school districts, while technical progress drives the performance of top-performing school districts.
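
Schematically, the extended decomposition has the following form; the notation is mine, and the placement of the environmental-change term is an illustrative reading of the abstract rather than the paper's exact expression.

```latex
% Schematic only; notation is mine. D^t denotes the distance function
% relative to the period-t frontier; EC and TC are the Färe et al.
% efficiency-change and technical-change components, and ENVC stands in
% for the environmental-change term the paper adds.
\[
  M_t^{t+1} =
  \underbrace{\frac{D^{t+1}(x^{t+1}, y^{t+1})}{D^{t}(x^{t}, y^{t})}}_{\text{EC}}
  \times
  \underbrace{\left[
    \frac{D^{t}(x^{t+1}, y^{t+1})}{D^{t+1}(x^{t+1}, y^{t+1})} \,
    \frac{D^{t}(x^{t}, y^{t})}{D^{t+1}(x^{t}, y^{t})}
  \right]^{1/2}}_{\text{TC}}
  \times \mathrm{ENVC}_t^{t+1}
\]
```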

Collaboration


Dive into Andrew L. Johnson's collaborations.

Top Co-Authors

Chia Yen Lee (National Cheng Kung University)
Soondo Hong (Pusan National University)
Brett A. Peters (University of Wisconsin–Milwaukee)
Leon F. McGinnis (Georgia Institute of Technology)
Dima Nazzal (University of Central Florida)
Wen-Chih Chen (National Chiao Tung University)