Lea Friedman
Ben-Gurion University of the Negev
Publications
Featured research published by Lea Friedman.
European Journal of Operational Research | 2002
Nicole Adler; Lea Friedman; Zilla Sinuany-Stern
Within data envelopment analysis (DEA) is a sub-group of papers in which many researchers have sought to improve the differential capabilities of DEA and to fully rank both efficient, as well as inefficient, decision-making units. The ranking methods have been divided in this paper into six, somewhat overlapping, areas. The first area involves the evaluation of a cross-efficiency matrix, in which the units are self and peer evaluated. The second idea, generally known as the super-efficiency method, ranks through the exclusion of the unit being scored from the dual linear program and an analysis of the change in the Pareto Frontier. The third grouping is based on benchmarking, in which a unit is highly ranked if it is chosen as a useful target for many other units. The fourth group utilizes multivariate statistical techniques, which are generally applied after the DEA dichotomic classification. The fifth research area ranks inefficient units through proportional measures of inefficiency. The last approach requires the collection of additional, preferential information from relevant decision-makers and combines multiple-criteria decision methodologies with the DEA approach. However, whilst each technique is useful in a specialist area, no one methodology can be prescribed here as the complete solution to the question of ranking.
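The cross-efficiency and super-efficiency ideas surveyed above can be made concrete with a small linear-programming sketch. The following is illustrative only, not code from the paper: it solves the standard CCR multiplier model, reuses each unit's optimal weights to build a cross-efficiency matrix, and recomputes a score with the evaluated unit excluded from its own constraint set (super-efficiency). The toy data and the helper name ccr_weights are assumptions made for the example.

```python
# Illustrative sketch: CCR efficiency, cross-efficiency and super-efficiency
# scores for a tiny set of decision-making units (toy data).
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0], [3.0, 2.0], [4.0, 5.0], [5.0, 3.0]])  # inputs  (units x m)
Y = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 2.0], [2.0, 3.0]])  # outputs (units x s)
n, m = X.shape
s = Y.shape[1]

def ccr_weights(o, exclude_self=False):
    """Solve the CCR multiplier LP for unit o; return (efficiency, u, v)."""
    # variables z = [u (s output weights), v (m input weights)], all >= 0
    c = np.concatenate([-Y[o], np.zeros(m)])                 # maximise u'y_o
    rows = [j for j in range(n) if not (exclude_self and j == o)]
    A_ub = np.hstack([Y[rows], -X[rows]])                    # u'y_j - v'x_j <= 0
    b_ub = np.zeros(len(rows))
    A_eq = np.concatenate([np.zeros(s), X[o]])[None, :]      # v'x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m), method="highs")
    u, v = res.x[:s], res.x[s:]
    return -res.fun, u, v

eff = np.array([ccr_weights(o)[0] for o in range(n)])

# Cross-efficiency: score every unit with the optimal weights of every other
# unit, then average (note: optimal weights need not be unique in practice).
cross = np.zeros((n, n))
for k in range(n):
    _, u, v = ccr_weights(k)
    cross[k] = (Y @ u) / (X @ v)
cross_score = cross.mean(axis=0)

# Super-efficiency: drop the evaluated unit's own constraint, so efficient
# units can score above 1 and be ranked among themselves.
super_eff = np.array([ccr_weights(o, exclude_self=True)[0] for o in range(n)])

print("CCR efficiency:   ", eff.round(3))
print("cross-efficiency: ", cross_score.round(3))
print("super-efficiency: ", super_eff.round(3))
```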
European Journal of Operational Research | 1997
Lea Friedman; Zilla Sinuany-Stern
This paper deals with the evaluation of decision-making units which have multiple inputs and outputs. A new method (CCA/DEA) is developed in which Canonical Correlation Analysis (CCA) is utilized to provide a full rank scaling for all the units, rather than the categorical classification (into efficient and inefficient units) given by Data Envelopment Analysis (DEA). The CCA/DEA approach is an attempt to bridge the gap between the frontier approach of DEA and the average tendencies of statistics (econometrics). Non-parametric statistical tests are employed to validate the consistency between the classification from DEA and the post-classification generated by CCA/DEA.
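The scaling idea can be stated compactly; the notation here is illustrative rather than taken from the paper. With input vectors $x_j$ and output vectors $y_j$ for units $j = 1,\dots,n$, CCA chooses one common pair of weight vectors over the whole sample,

$$(a^{*}, b^{*}) \;=\; \arg\max_{a,\,b}\ \operatorname{corr}\!\big(a^{\top}x_j,\; b^{\top}y_j\big), \qquad T_j \;=\; \frac{b^{*\top} y_j}{a^{*\top} x_j},$$

so every unit is scored with the same weights and $T_j$ yields a full ranking, whereas DEA lets each unit choose its own most favourable weights and only delivers the efficient/inefficient classification.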
European Journal of Operational Research | 1998
Zilla Sinuany-Stern; Lea Friedman
The purpose of this study is to develop a new method which, for given inputs and outputs, provides the best common weights for all the units, weights that discriminate optimally between the efficient and inefficient units as pre-classified by Data Envelopment Analysis (DEA), in order to rank all the units on the same scale. This new method, Discriminant Data Envelopment Analysis of Ratios (DR/DEA), presents a further post-optimality analysis of DEA for organizational units when their multiple inputs and outputs are given. We construct the ratio between the composite output and the composite input, where the common weights are computed by a new non-linear optimization of the goodness of separation between the two pre-given groups. A practical use of DR/DEA is that the common weights may be utilized for ranking the units on a unified scale. DR/DEA applies a two-group discriminant criterion to ratios, rather than to the linear function used in traditional discriminant analysis. Moreover, non-parametric statistical tests are employed to verify the consistency between the classification from DEA (efficient and inefficient units) and the post-classification generated by DR/DEA.
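One natural reading of the construction, in illustrative notation rather than the paper's exact criterion: with units pre-classified by DEA into an efficient group $E$ and an inefficient group $I$, a single pair of common weights $(u, v)$ is chosen so that the ratio scores separate the two groups as well as possible,

$$T_j(u,v) \;=\; \frac{u^{\top}y_j}{v^{\top}x_j}, \qquad \max_{u,\,v\,\ge\,0}\ \frac{\big(\bar T_E(u,v) - \bar T_I(u,v)\big)^{2}}{S_E^{2}(u,v) + S_I^{2}(u,v)},$$

where $\bar T_E,\bar T_I$ are the group means of the ratio scores and $S_E^{2},S_I^{2}$ their within-group sums of squares; the optimal common weights then rank all units on one scale.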
Computers & Operations Research | 1998
Lea Friedman; Zilla Sinuany-Stern
Data Envelopment Analysis (DEA) was introduced by Charnes et al. (Charnes, A., Cooper, W. W. and Rhodes, E., Measuring the efficiency of decision making units. Eur. J. Oper. Res., 1978, 2, 429–444), building on the Farrell measurement of productive efficiency (J. Royal Statist. Soc., 1957, A120, 253–290). Originally DEA was designed to evaluate the efficiency of decision-making units in the public sector (e.g. schools, towns, hospitals and nations) whose given multiple inputs and outputs are not measured in unified units (e.g. money); it was eventually applied in business and industry (e.g. bank branches). DEA merely classifies the units into two dichotomic groups, efficient and inefficient. The purpose of this paper is to fully rank the units, from the most efficient to the least efficient, within the DEA context. For this purpose three recent scale-ranking methods developed within the DEA framework are used: two are based on multivariate statistical analysis, canonical correlation analysis (CCA) and discriminant analysis of ratios (DR/DEA), while the third is based on the cross-efficiency matrix (CE/DEA) derived from the DEA. This multi-ranking approach is needed to validate the ranks: the consistency and goodness of fit of each method with the DEA classification are tested by various nonparametric statistical tests, and once consistency among the ranking methods is established, a new overall ranking is constructed by combining the ranks that statistically fit the DEA classification. The ranking approach is also combined here with stochastic DEA. This combined ranking does not replace the DEA; it adds a post-optimality analysis to the DEA results, providing full ranks beyond the mere classification into two dichotomic groups, and in a way it bridges between the DEA frontier (Pareto optimum) approach and the statistical/econometric approach of averages. The quality of this bridge is tested statistically and thus depends on the data. The motivating example is the ranking of the Israeli Industrial Branches. Canonical correlation analysis is further used to delete unmeaningful input and output variables and so increase the fit between the DEA and the ranking, and the ranking methods are run on two sets of labor variables (number of man hours and average wage versus total labor cost) in order to select the combination of variables that best represents labor.
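The validation step described above, checking that alternative rankings agree with each other and with the DEA classification before they are combined, can be sketched with standard nonparametric tests. This is an illustrative outline only; the rank vectors, the unit count and the method labels are toy assumptions, not the paper's data.

```python
# Sketch (toy data): test agreement between full rankings and consistency
# with a DEA efficient/inefficient classification.
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr, mannwhitneyu

ranks = {                       # hypothetical ranks of 8 units (1 = best)
    "CCA/DEA": np.array([1, 3, 2, 5, 4, 7, 6, 8]),
    "DR/DEA":  np.array([2, 1, 3, 4, 6, 5, 7, 8]),
    "CE/DEA":  np.array([1, 2, 4, 3, 5, 6, 8, 7]),
}
dea_efficient = np.array([True, True, True, False, False, False, False, False])

# 1. Pairwise Spearman rank correlations between the ranking methods.
for a, b in combinations(ranks, 2):
    rho, p = spearmanr(ranks[a], ranks[b])
    print(f"{a} vs {b}: rho={rho:.2f}, p={p:.3f}")

# 2. Mann-Whitney test: do DEA-efficient units receive systematically better
#    (smaller) ranks under each method than the inefficient units?
for name, r in ranks.items():
    stat, p = mannwhitneyu(r[dea_efficient], r[~dea_efficient], alternative="less")
    print(f"{name} vs DEA classification: U={stat:.1f}, p={p:.3f}")
```

Methods that pass such checks can then be averaged into the overall combined rank, as the abstract describes.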
Journal of the American Statistical Association | 1980
Lea Friedman; Ilya Gertsbakh
The existence and some properties of maximum likelihood estimators (MLEs) are studied for a minimum-type distribution function corresponding to a minimum of two independent random variables having exponential and Weibull distributions. It is shown that if all three parameters are unknown, then there is a path in the parameter space along which the likelihood function (LF) tends to infinity. It is also proved that if the Weibull shape parameter is known, then the LF is concave, the MLEs exist, and they can be found by solving the set of likelihood equations. Properties of the MLEs for this case are illustrated by a Monte Carlo experiment. A sufficient condition for the existence of MLEs is given for the case of known Weibull scale parameter.
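Under one common parameterization (notation here is illustrative; the paper's may differ), the model is $Z = \min(X, Y)$ with $X \sim \mathrm{Exp}(\lambda)$ and $Y \sim \mathrm{Weibull}(\alpha, \beta)$, so that

$$\bar F_Z(t) = \exp\!\Big(-\lambda t - (t/\beta)^{\alpha}\Big), \qquad f_Z(t) = \Big(\lambda + \tfrac{\alpha}{\beta}\big(\tfrac{t}{\beta}\big)^{\alpha-1}\Big)\exp\!\Big(-\lambda t - (t/\beta)^{\alpha}\Big),$$

$$\ell(\lambda, \alpha, \beta) = \sum_{i=1}^{n}\log\!\Big(\lambda + \tfrac{\alpha}{\beta}\big(\tfrac{t_i}{\beta}\big)^{\alpha-1}\Big) - \lambda\sum_{i=1}^{n} t_i - \sum_{i=1}^{n}\big(\tfrac{t_i}{\beta}\big)^{\alpha}.$$

For a fixed, known shape $\alpha$, writing $\theta = \beta^{-\alpha}$ makes each term $\log(\lambda + \alpha\theta\, t_i^{\alpha-1})$ concave in $(\lambda, \theta)$ and the remaining terms linear, which is consistent with the concavity result stated in the abstract; with $\alpha$ also free, the likelihood diverges along a path in the parameter space, as noted above.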
Robotics and Computer-integrated Manufacturing | 1998
Yael Edan; Lea Friedman; Avraham Mehrez; Leonid Slutski
A three-dimensional statistical evaluation framework for performance measurement of robotic systems has been developed. A specific experimental setup has been designed, implemented, and evaluated for different robot characteristics (velocities and target location) and for a specific task. A statistical analysis method to evaluate performance is demonstrated. Results indicate the significance of this methodology since the capabilities and performance of a particular robot depend on actual operating conditions.
Journal of Business Economics and Management | 2010
Yossi Hadad; Lea Friedman; Aviad A. Israeli
This paper introduces popular methods for ranking alternatives with multiple inputs and multiple outputs in the DEA context. The ranking methods are based on different criteria; consequently, the rankings of the alternatives are not always the same, particularly with regard to the best alternative. The decision maker, however, must make an absolute decision as to the most favored alternative. This study proposes a new ranking method based on the average of the highly correlated ranking methods. The new method is applied to a case study of ranking hotels in Israel.
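A minimal sketch of the idea (toy rank vectors and an arbitrary cut-off, not the paper's case study or its selection criterion): keep only the ranking methods that are highly correlated with the others, then average their ranks into a single decision rank.

```python
# Sketch: average only the highly correlated ranking methods (toy data).
import numpy as np
from scipy.stats import spearmanr

ranks = {                       # hypothetical ranks of 6 alternatives (1 = best)
    "A": np.array([1, 2, 3, 4, 5, 6]),
    "B": np.array([2, 1, 3, 5, 4, 6]),
    "C": np.array([6, 5, 1, 2, 4, 3]),   # an outlying method
    "D": np.array([1, 2, 4, 3, 5, 6]),
}
names = list(ranks)
rho = np.array([[spearmanr(ranks[i], ranks[j])[0] for j in names] for i in names])

# Keep a method if its mean correlation with the *other* methods is high
# (0.3 is an arbitrary illustrative cut-off).
mean_rho = (rho.sum(axis=1) - 1.0) / (len(names) - 1)
keep = [n for n, m in zip(names, mean_rho) if m >= 0.3]

combined = np.mean([ranks[n] for n in keep], axis=0)
final_rank = combined.argsort().argsort() + 1
print("kept methods:", keep)
print("final rank:", final_rank)
```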
Engineering Costs and Production Economics | 1989
Lea Friedman; Dimitri Golenko-Ginzburg; Zilla Sinuany-Stern
A production system operates at a speed which is a random variable with a known distribution function. Given the routine control point, the actual accumulated production observed at that point, and the rate of demand, the decision-maker determines the next control point. The interval between two control points is made maximal or minimal under a probabilistic constraint ensuring that, at any point, the actual production will not fall below the planned production at a given confidence level 1 − α. The problem applies to semi-automated production processes in which the advancement of the process cannot be measured or viewed continuously, so the process has to be controlled at discrete points by the decision-maker. A formula for determining the next control point is developed for a general distribution function, and examples for the normal, uniform and beta distributions are examined. In general, the solution differs between shortage and surplus and depends on whether the demand rate is smaller or greater than the α-th quantile of the distribution function of the speed.
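In illustrative notation (an informal reconstruction, not the paper's own formula): let $V_i$ be the actual accumulated production observed at control point $t_i$, let the speed $S$ have a continuous distribution function $F$, and let $d$ be the demand rate. For any candidate next control point $t_{i+1} > t_i$, the chance constraint is

$$\Pr\!\big(V_i + S\,(t_{i+1} - t_i) \;\ge\; d\,t_{i+1}\big) \;\ge\; 1 - \alpha \quad\Longleftrightarrow\quad \frac{d\,t_{i+1} - V_i}{t_{i+1} - t_i} \;\le\; s_{\alpha}, \qquad s_{\alpha} = F^{-1}(\alpha),$$

so the control rule reduces to solving this inequality for the largest admissible $t_{i+1}$; the sign of $d - s_{\alpha}$ together with the current surplus or shortage $V_i - d\,t_i$ determines which of the cases mentioned in the abstract applies.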
Mathematical Methods of Operations Research | 1996
Abraham Mehrez; Lea Friedman
In this article we employ the results of Fatti et al. (1987) on the expected value of sample information (EVSI) for a class of economic problems dealing with one source of information and a decision to reject or accept an investment project. We consider a framework which allows for the purchasing of many types of costly information aimed at reducing the uncertainty regarding the project's monetary value. The optimal information-seeking strategy is evaluated for a risk-neutral decision maker, and its upper bound is derived for some special cases.
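The central quantity can be written compactly using the standard definition (notation here is illustrative, not taken from Fatti et al.): with an action $a \in \{\text{accept}, \text{reject}\}$, uncertain monetary value $V$, payoff $\pi(a, V)$, and a costly signal $X$,

$$\mathrm{EVSI}(X) \;=\; \mathbb{E}_{X}\!\Big[\max_{a}\ \mathbb{E}\big[\pi(a, V)\,\big|\,X\big]\Big] \;-\; \max_{a}\ \mathbb{E}\big[\pi(a, V)\big],$$

and a risk-neutral decision maker purchases the information only if $\mathrm{EVSI}(X)$ exceeds its cost; with several available information sources, the same comparison drives the sequential information-seeking strategy whose upper bound is derived for the special cases mentioned above.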
Central European Journal of Operations Research | 2013
Yossi Hadad; Lea Friedman; Victoria Rybalkin; Zilla Sinuany-Stern
In this paper, we use simulations to investigate the relationship between data envelopment analysis (DEA) efficiency and major production functions: Cobb-Douglas, constant elasticity of substitution, and the transcendental logarithmic function. Two DEA models were used: constant returns to scale (the CCR model) and variable returns to scale (the BCC model). Each model was investigated in two versions: with bounded and with unbounded weights. Two cases were simulated: with and without errors in the estimation of the production functions. Various degrees of homogeneity of the production function were tested, reflecting constant, increasing, and decreasing returns to scale. For the case with errors, three distribution functions were utilized: uniform, normal, and double exponential; for each distribution, 16 levels of the coefficient of variation (CV) were used. In all the tested cases, two measures were analysed: the percentage of efficient units (out of the total number of units) and the average efficiency score. We applied regression analysis to test the relationship between these two efficiency measures and the above parameters. Overall, we found that the degree of homogeneity has the largest effect on efficiency. Efficiency declines as the errors grow (as reflected by a larger CV and by the spread of the probability distribution away from its centre). Bounds on the weights tend to smooth this effect and bring the various DEA versions closer to one another. Both efficiency measures show similar regression tendencies. Finally, the relationship between the efficiency measures and the explanatory variables is quadratic.
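For concreteness, a standard formulation of the simulated technology rather than the paper's exact setup (notation is illustrative): with the Cobb-Douglas case, outputs are generated as

$$y_j \;=\; A\,\prod_{k=1}^{m} x_{jk}^{\beta_k}\cdot e^{\varepsilon_j}, \qquad h \;=\; \sum_{k=1}^{m} \beta_k,$$

where the degree of homogeneity $h = 1$ corresponds to constant, $h > 1$ to increasing, and $h < 1$ to decreasing returns to scale, and the error term $\varepsilon_j$ is drawn from the uniform, normal, or double-exponential distribution at the chosen coefficient of variation; the generated input-output data are then scored by the CCR and BCC models with or without weight bounds.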