Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where José H. Dulá is active.

Publication


Featured research published by José H. Dulá.


Annals of Operations Research | 2006

Validating DEA as a ranking tool: An application of DEA to assess performance in higher education

Marie-Laure Bougnol; José H. Dulá

There is a general interest in ranking schemes applied to complex entities described by multiple attributes. Published rankings for universities are in great demand but are also highly controversial. We compare two classification and ranking schemes involving universities: one from a published report, ‘Top American Research Universities’ by the University of Florida's TheCenter, and the other using DEA. Both approaches use the same data and model. We compare the two methods and discover important equivalences. We conclude that the critical aspect in classification and ranking is the model. This suggests that DEA is a suitable tool for these types of studies.


Informs Journal on Computing | 1998

An Algorithm for Identifying the Frame of a Pointed Finite Conical Hull

José H. Dulá; Richard V. Helgason; N. Venugopal

We present an algorithm for identifying the extreme rays of the conical hull of a finite set of vectors whose generated cone is pointed. This problem appears in diverse areas including stochastic programming, computational geometry, and non-parametric efficiency measurement. The standard approach consists of solving a linear program for every element of the set of vectors. The new algorithm differs in that it solves fewer and substantially smaller LPs. Extensive computational testing validates the algorithm and demonstrates that for a wide range of problems it is computationally superior to the standard approach.
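The "standard approach" mentioned in this abstract can be sketched concretely. The feasibility LP below is a minimal illustration, assuming NumPy and SciPy are available (the function name and the tiny data set are mine): it checks whether one column of a matrix lies in the conical hull of the remaining columns, and infeasibility means the column is an extreme ray and therefore belongs to the frame. It shows the baseline of one LP per vector, not the paper's improved algorithm.

```python
import numpy as np
from scipy.optimize import linprog

def is_extreme_ray(A: np.ndarray, k: int) -> bool:
    """Return True if column k of A is an extreme ray of cone(A).

    Feasibility LP: does there exist lam >= 0 with
        A[:, others] @ lam == A[:, k] ?
    Infeasible -> column k cannot be generated by the others -> extreme ray.
    (Assumes a pointed cone with nonzero, non-duplicated columns.)
    """
    others = [j for j in range(A.shape[1]) if j != k]
    n = len(others)
    res = linprog(c=np.zeros(n), A_eq=A[:, others], b_eq=A[:, k],
                  bounds=[(0, None)] * n, method="highs")
    return not res.success   # no conic representation found: extreme ray

# Tiny usage example: two coordinate directions in R^2 plus one interior ray.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
frame = [k for k in range(A.shape[1]) if is_extreme_ray(A, k)]
print(frame)   # expected: [0, 1]; the ray (1, 1) is generated by the other two
```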


Journal of Productivity Analysis | 2001

A Computational Framework for Accelerating DEA

José H. Dulá; Robert M. Thrall

We introduce a new computational framework for DEA that reduces computation times and increases flexibility in applications over multiple models and orientations. The process is based on the identification of frames, minimal subsets of the data needed to describe the models in the problems, for each of the four standard production possibility sets. It exploits the fact that the frames of the models are closely interrelated. Access to a frame of a production possibility set permits a complete analysis in a second phase for the corresponding model, either oriented or orientation-free. This second phase proceeds quickly, especially if the frame is a small subset of the data points. Besides accelerating computations, the new framework imparts greater flexibility to the analysis by not committing the analyst to a model or orientation when performing the bulk of the calculations. Computational testing validates the results and reveals that, with minimal additional time over what is required for a full DEA study for a given model and specified orientation, one can obtain the analysis for the four models and all orientations.
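For context, the four standard production possibility sets referred to here differ, in the envelopment form, only in the restriction placed on the intensity vector λ (X and Y collect the input and output columns of all DMUs, x_o and y_o are the columns of the DMU being scored). A compact summary in standard DEA notation, mine rather than the paper's:

```latex
% Envelopment LP for DMU o, input orientation:
%   min theta   s.t.   X\lambda \le \theta\, x_o, \quad Y\lambda \ge y_o, \quad \lambda \ge 0,
% plus one additional restriction on \lambda selecting the production possibility set:
\begin{aligned}
\text{constant returns to scale (CRS):}        &\quad \text{no further restriction},\\
\text{variable returns to scale (VRS):}        &\quad \mathbf{1}^{\top}\lambda = 1,\\
\text{non-increasing returns to scale (NIRS):} &\quad \mathbf{1}^{\top}\lambda \le 1,\\
\text{non-decreasing returns to scale (NDRS):} &\quad \mathbf{1}^{\top}\lambda \ge 1.
\end{aligned}
```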


Pesquisa Operacional | 2002

Computations in DEA

José H. Dulá

DEA is a well-established, widely used, and powerful analytical resource in the toolbox of the OR/MS analyst. It is used to assess the relative efficiency of many functionally similar entities. It has applications in diverse areas including finance and banking, education, and healthcare. DEA is computationally intensive and, as the scale of applications grows, this intensity rapidly becomes one of the limiting factors in its utility. In this paper, we explore computations in DEA. We investigate the theory behind the schemes, procedures, and algorithms used in performing a DEA study, and we report on current practices ranging from the basic and standard to the advanced and sophisticated. Our objective is to give researchers and practitioners an appreciation for the computational aspects of DEA that will permit them to understand the performance, problems, complications, and limitations, as well as the potential, of this technique.
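As a concrete reference point for the computations surveyed here, the sketch below (assuming NumPy and SciPy; the function name and test data are mine) solves the input-oriented CCR multiplier LP for a single DMU. Standard practice is to solve one such LP per DMU, with the full data set appearing in every LP, which is where the computational burden comes from.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_multiplier_score(X: np.ndarray, Y: np.ndarray, o: int) -> float:
    """Input-oriented CCR efficiency of DMU o via the multiplier LP.

    X: (m, n) inputs, Y: (s, n) outputs, one column per DMU.
        max  u . Y[:, o]
        s.t. v . X[:, o] = 1
             u . Y[:, j] - v . X[:, j] <= 0   for every DMU j
             u, v >= 0
    The optimum lies in (0, 1]; a score of 1 marks a (weakly) efficient DMU.
    """
    m, n = X.shape
    s = Y.shape[0]
    # Decision vector z = [u (s weights), v (m weights)]; linprog minimizes.
    c = np.concatenate([-Y[:, o], np.zeros(m)])
    A_eq = np.concatenate([np.zeros(s), X[:, o]]).reshape(1, -1)
    b_eq = np.array([1.0])
    A_ub = np.hstack([Y.T, -X.T])            # one row per DMU j
    b_ub = np.zeros(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (s + m), method="highs")
    return -res.fun

# The standard approach solves this LP once per DMU:
X = np.array([[2.0, 4.0, 3.0]])              # one input
Y = np.array([[2.0, 2.0, 3.0]])              # one output
print([round(ccr_multiplier_score(X, Y, o), 3) for o in range(3)])
# expected: [1.0, 0.5, 1.0] (the best output/input ratio is attained by DMUs 0 and 2)
```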


Annals of Operations Research | 2006

Benefit-cost analysis using data envelopment analysis

Norman Keith Womer; Marie-Laure Bougnol; José H. Dulá; Donna L. Retzlaff-Roberts

Benefit-cost analysis is required by law and regulation throughout the federal government. Robert Dorfman (1996) declares, ‘Three prominent shortcomings of benefit-cost analysis as currently practiced are (1) it does not identify the population segments that the proposed measure benefits or harms, (2) it attempts to reduce all comparisons to a single dimension, generally dollars and cents, and (3) it conceals the degree of inaccuracy or uncertainty in its estimates.’ The paper develops an approach for conducting benefit-cost analysis derived from data envelopment analysis (DEA) that overcomes each of Dorfman's objections. The models and methodology proposed give decision makers a tool for evaluating alternative policies and projects where there are multiple constituencies who may have conflicting perspectives. This method incorporates multiple incommensurate attributes while allowing for measures of uncertainty. An application is used to illustrate the method.


Informs Journal on Computing | 2011

An Algorithm for Data Envelopment Analysis

José H. Dulá

The standard approach to process a data envelopment analysis (DEA) data set, and the one in widespread use, consists of solving as many linear programs (LPs) as there are entities. The dimensions of these LPs are determined by the size of the data sets, and they keep their dimensions as each decision-making unit is scored. This approach can be computationally demanding, especially with large data sets. We present an algorithm for DEA based on a two-phase procedure. The first phase identifies the extreme efficient entities, the frame, of the production possibility set. The frame is then used in a second phase to score the rest of the entities. The new procedure applies to any of the four standard DEA returns to scale. It also imparts flexibility to a DEA study because it postpones the decision about orientation, benchmarking measurements, etc., until after the frame has been identified. Extensive computational testing on large data sets verifies and validates the procedure and demonstrates that it is computationally fast.
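A minimal sketch of the two-phase idea under constant returns to scale, in envelopment form (my construction, not the paper's algorithm): phase 1 below finds the efficient units the slow way, by scoring every DMU against the entire data set, whereas the algorithm in the paper builds the frame with much smaller LPs; phase 2 then scores every DMU against that reduced reference set only, which is where the savings appear when the frame is a small fraction of the data.

```python
import numpy as np
from scipy.optimize import linprog

def crs_score(X, Y, o, ref):
    """Input-oriented CRS envelopment LP for DMU o against reference set `ref`.

    min theta  s.t.  X[:, ref] @ lam <= theta * X[:, o],
                     Y[:, ref] @ lam >= Y[:, o],  lam >= 0.
    """
    m, s, r = X.shape[0], Y.shape[0], len(ref)
    c = np.concatenate([[1.0], np.zeros(r)])            # minimize theta
    A_in = np.hstack([-X[:, [o]], X[:, ref]])            # inputs:  X lam - theta x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y[:, ref]])    # outputs: -Y lam <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[:, o]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (1 + r), method="highs")
    return res.x[0]

def two_phase_scores(X, Y, tol=1e-6):
    n = X.shape[1]
    everyone = list(range(n))
    # Phase 1: keep the efficient units (a superset of the frame).  The paper's
    # algorithm does this step with small, growing LPs instead of full-size ones.
    frame = [j for j in everyone if crs_score(X, Y, j, everyone) >= 1.0 - tol]
    # Phase 2: score every DMU against the reduced reference set only.
    return frame, [crs_score(X, Y, o, frame) for o in everyone]

X = np.array([[2.0, 4.0, 3.0]])
Y = np.array([[2.0, 2.0, 3.0]])
frame, scores = two_phase_scores(X, Y)
print(frame, [round(t, 3) for t in scores])   # expected: [0, 2] [1.0, 0.5, 1.0]
```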


Computers & Operations Research | 2009

Preprocessing DEA

José H. Dulá; F. J. López

We collect, organize, analyze, implement, test, and compare a comprehensive list of ideas for preprocessors for entity classification in DEA. We limit our focus to procedures that do not involve solving LPs. The procedures are adaptations from previous work in DEA and in computational geometry. The result is five preprocessing methods, three of which are new for DEA. Testing shows that preprocessors have the potential to classify a large number of DMUs economically, making them an important computational tool, especially in large-scale applications. Scope and purpose: This is a comprehensive study of preprocessing in DEA. The purpose is to provide tools that will reduce the computational burden of DEA studies, especially in large-scale applications.
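One family of LP-free preprocessors can be illustrated with a pairwise domination pass: a DMU whose inputs are all at least as large, and whose outputs are all at most as large, as those of some other DMU (with at least one strict inequality) can never be extreme-efficient, so it can be set aside before any LP is solved. The sketch below (assuming NumPy; the function name and data are mine) is a generic illustration of this idea, not necessarily one of the five procedures studied in the paper.

```python
import numpy as np

def dominated_mask(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """X: (m, n) inputs, Y: (s, n) outputs, one column per DMU.

    Returns a boolean array where True marks DMUs dominated by some other DMU
    (inputs no smaller, outputs no larger, at least one strict inequality).
    Dominated DMUs cannot be extreme-efficient, so they can be classified
    without solving any LP.
    """
    n = X.shape[1]
    dominated = np.zeros(n, dtype=bool)
    for k in range(n):
        for j in range(n):
            if j == k:
                continue
            weakly = np.all(X[:, j] <= X[:, k]) and np.all(Y[:, j] >= Y[:, k])
            strictly = np.any(X[:, j] < X[:, k]) or np.any(Y[:, j] > Y[:, k])
            if weakly and strictly:
                dominated[k] = True
                break
    return dominated

# Usage: DMU 2 uses more of every input to produce less output than DMU 0.
X = np.array([[2.0, 3.0, 4.0],
              [1.0, 0.5, 2.0]])
Y = np.array([[5.0, 5.0, 4.0]])
print(dominated_mask(X, Y))   # expected: only DMU 2 is marked dominated
```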


Journal of the Operational Research Society | 2008

Adding and removing an attribute in a DEA model: theory and processing

Francisco J. López; José H. Dulá

We present a theoretical and computational study of the impact of inserting a new attribute and removing an old attribute in a data envelopment analysis (DEA) model. Our objective is to obviate a portion of the computational effort needed to process such model changes by studying how the efficient/inefficient status of decision-making units (DMUs) is affected. Reducing computational effort is important since DEA is known to be computationally intensive, especially in large-scale applications. We present a comprehensive theoretical study of the impact of attribute insertion and removal in DEA models, which includes sufficient conditions for identifying efficient DMUs when an attribute is added and inefficient DMUs when an attribute is removed. We also introduce a new procedure, HyperClimb, specially designed to quickly identify some of the new efficient DMUs, without involving LPs, when the model changes with the addition of an attribute. We report on results from computational tests designed to assess this procedure's effectiveness.
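A standard monotonicity argument gives a baseline for such sufficient conditions (this is general DEA folklore, not necessarily the specific conditions proved in the paper): in the CCR multiplier form, any feasible weight vector remains feasible after a new attribute is inserted by assigning that attribute a zero weight, so radial scores can only stay the same or increase when an attribute is added, and can only stay the same or decrease when one is removed. Hence a DMU that is radially efficient before an insertion remains (at least weakly) efficient afterwards, and a DMU that is inefficient before a removal remains inefficient afterwards. Writing θ_o(S) for the radial score of DMU o computed over attribute set S, with x and y restricted to S:

```latex
\theta_o(S) \;=\; \max_{u,v \ge 0}\Bigl\{\, u^{\top} y_o \;:\; v^{\top} x_o = 1,\;\;
                  u^{\top} y_j - v^{\top} x_j \le 0 \ \ \forall j \Bigr\}
\qquad\Longrightarrow\qquad
\theta_o(S) \;\le\; \theta_o\bigl(S \cup \{\text{new attribute}\}\bigr).
```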


Journal of the Operational Research Society | 2009

A geometrical approach for generalizing the production possibility set in DEA

José H. Dulá

Consider a Data Envelopment Analysis (DEA) study with n Decision Making Units (DMUs) and a model with m inputs plus outputs. The data for this study are a point set, {a_1, …, a_n}, in ℝ^m. A DMU is efficient if its data point is located on the efficient frontier portion of the boundary of an empirical production possibility set, a polyhedral envelopment hull described by the data. From this perspective, DEA efficiency is a purely geometric concept that can be applied to general point sets to identify records with extreme properties. The generalized approach permits new applications for nonparametric frontiers. Examples of such applications are fraud detection, auditing, security, and appraisals. We extend the concept of DEA efficiency to frontier outliers in general envelopment hulls.


European Journal of Operational Research | 1997

Equivalences between Data Envelopment Analysis and the theory of redundancy in linear systems

José H. Dulá

This paper establishes how the non-parametric frontier estimation methodology of Data Envelopment Analysis (DEA) and the classical problem of detecting redundancy in a system of linear inequalities are connected. We present an analysis, from the point of view of polyhedral set theory, of the sets generated in two of DEA's models from which the empirical efficient production frontier is established. This yields convenient alternative characterizations of these sets which provide new insights about their properties. We use these insights to show how these polyhedral sets connect DEA to redundancy in linear systems. This means that DEA can benefit from a rich and well-established collection of computational and theoretical results that carry over directly from the study of redundancy in linear systems.

Collaboration


Dive into José H. Dulá's collaborations.

Top Co-Authors


Francisco J. López

Middle Georgia State College


J. Paul Brooks

Virginia Commonwealth University


Amy L. Pakyz

Virginia Commonwealth University


B L Hickman

University of Nebraska–Lincoln


F. J. López

Middle Georgia State College
