Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Chris Tofallis is active.

Publications


Featured research published by Chris Tofallis.


Journal of the Operational Research Society | 1998

Fractional Programming: Theory, Methods and Applications

Chris Tofallis

Introduction. 1. Fractional Programming Applications. 2. Convex, Quasiconvex, Pseudoconvex, Logarithmic Convex, αm-Convex, and Invex Functions. 3. Methods for Solving Linear Fractional Programming Problems. 4. Nonlinear Fractional Programming. 5. Duality in Fractional Programming. 6. Fractional Programming with Multiple Objective Functions. 7. Fractional Programming in the Complex Space. 8. Special Linear Fractional Programming Problems. 9. Integer and Mixed Integer Linear Fractional Programming. 10. Fractional Transportation Problem. Bibliography. Subject Index. Author Index.
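A standard method from the book's chapter on solving linear fractional programs is the Charnes–Cooper transformation, which converts a ratio objective into an ordinary linear program. A minimal sketch on an invented two-variable problem (the data and the use of SciPy are illustrative, not from the book):

```python
# Charnes-Cooper transformation for a linear fractional program (LFP).
# Illustrative problem (data invented for this sketch):
#   maximize (2*x1 + x2 + 1) / (x1 + x2 + 2)
#   subject to x1 + x2 <= 4,  x1, x2 >= 0
# Substituting t = 1/(d'x + beta) and y = t*x turns the LFP into an LP:
#   maximize c'y + alpha*t
#   subject to A y - b t <= 0,  d'y + beta*t = 1,  y >= 0, t >= 0
from scipy.optimize import linprog

# linprog minimises, so negate the objective 2*y1 + y2 + 1*t.
res = linprog(
    c=[-2, -1, -1],
    A_ub=[[1, 1, -4]],              # y1 + y2 - 4t <= 0   (from x1 + x2 <= 4)
    b_ub=[0],
    A_eq=[[1, 1, 2]],               # y1 + y2 + 2t = 1    (normalisation)
    b_eq=[1],
    bounds=[(0, None)] * 3,
)
y1, y2, t = res.x
x = (y1 / t, y2 / t)                # recover the original variables
print(x, -res.fun)                  # optimum at (4, 0) with ratio value 1.5
```

The equality constraint fixes the denominator of the ratio at 1, which is what makes the transformed objective linear.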


Journal of the Operational Research Society | 2015

A Better Measure of Relative Prediction Accuracy for Model Selection and Model Estimation

Chris Tofallis

Surveys show that the mean absolute percentage error (MAPE) is the most widely used measure of prediction accuracy in businesses and organizations. It is, however, biased: when used to select among competing prediction methods it systematically selects those whose predictions are too low. This has not been widely discussed and so is not generally known among practitioners. We explain why this happens. We investigate an alternative relative accuracy measure which avoids this bias: the log of the accuracy ratio, that is, log (prediction/actual). Relative accuracy is particularly relevant if the scatter in the data grows as the value of the variable grows (heteroscedasticity). We demonstrate using simulations that for heteroscedastic data (modelled by a multiplicative error factor) the proposed metric is far superior to MAPE for model selection. Another use for accuracy measures is in fitting parameters to prediction models. Minimum MAPE models do not predict a simple statistic and so theoretical analysis is limited. We prove that when the proposed metric is used instead, the resulting least squares regression model predicts the geometric mean. This important property allows its theoretical properties to be understood.
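The geometric-mean property claimed for the log accuracy ratio is easy to check numerically. A minimal sketch with invented data: the constant prediction that minimises the sum of squared values of log(prediction/actual) is the geometric mean of the observations, not the arithmetic mean.

```python
# Minimal check (invented data): minimising squared log accuracy ratios,
# log(prediction/actual), with a constant predictor recovers the
# geometric mean of the actuals.
import numpy as np

actual = np.array([2.0, 8.0, 4.0, 16.0])

def sq_log_ratio_loss(pred, actual):
    """Sum of squared log accuracy ratios, log(prediction/actual)."""
    return np.sum(np.log(pred / actual) ** 2)

# Minimise over a fine grid of candidate constant predictions.
grid = np.linspace(1.0, 20.0, 200001)
losses = [sq_log_ratio_loss(c, actual) for c in grid]
best = grid[int(np.argmin(losses))]

geometric_mean = np.exp(np.log(actual).mean())   # = (2*8*4*16)**(1/4)
print(best, geometric_mean)                      # both close to 5.657
```

The arithmetic mean of these data is 7.5, so the two estimates differ markedly when the data are skewed.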


Computers & Operations Research | 1997

Input efficiency profiling: an application to airlines

Chris Tofallis

Data envelopment analysis (DEA) can produce results which lack discrimination; one consequence is that a large proportion of decision-making units (DMUs) appear to be efficient. In addition, because DEA is a radial measure of efficiency, it assumes that all inputs of a production unit need to be reduced by the same proportion for efficiency to be achieved. It would seem more realistic to expect different inputs to have different efficiencies associated with them. A method is presented which retains the original spirit of DEA in trying to extract as much information as possible from the data without applying value judgments in the form of additional constraints. We propose that inputs which are not substitutes for each other be assessed separately, and only with respect to outputs which consume them or to which they are otherwise related. In this way input-specific efficiency ratings are derived, giving a profile for each DMU. When applied to a data set of 14 airlines, the method uncovers inefficiencies which DEA could not find. DEA found half of the airlines to be fully efficient in all factors, whereas our profiling approach was more discriminating and showed that none of the airlines were efficient in all three of the inputs considered. This highlights a significant difference from DEA: by investigating the utilisation of individual inputs we are able to identify best practice in each area. It is quite possible that no unit demonstrates best practice in every area, so each unit will have targets to work towards; this is intuitively appealing as well as providing a link with the philosophy of best-practice benchmarking.
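A simplified sketch of the profiling idea, with invented data (the paper's formulation is richer): each input is rated separately against the output it relates to, so a DMU receives a profile of input-specific ratings rather than a single radial score.

```python
# Simplified input-efficiency profiling sketch (invented data).  Each
# input is assessed on its own: a DMU's rating for an input is its
# output/input ratio divided by the best such ratio observed, so 1.0
# marks best practice for that input.
import numpy as np

outputs = np.array([100.0, 120.0, 90.0])          # one output per DMU
inputs = np.array([                               # rows: DMUs, cols: inputs
    [50.0, 10.0],
    [40.0, 15.0],
    [45.0,  9.0],
])

ratios = outputs[:, None] / inputs                # output per unit of input
profile = ratios / ratios.max(axis=0)             # 1.0 = best practice

print(np.round(profile, 3))
```

In this toy example no DMU scores 1.0 on both inputs: DMU 2 is best practice in the first input while DMUs 1 and 3 lead on the second, illustrating why each unit ends up with targets in at least one area.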


European Journal of Operational Research | 2008

Investment Volatility: A Critique of Standard Beta Estimation and a Simple Way Forward

Chris Tofallis

Beta is a widely used quantity in investment analysis. We review the common interpretations that are applied to beta in finance and show that the standard method of estimation - least squares regression - is inconsistent with these interpretations. We present the case for an alternative beta estimator which is more appropriate, as well as being easier to understand and to calculate. Unlike regression, the line fit we propose treats both variables in the same way. Remarkably, it provides a slope that is precisely the ratio of the volatility of the investment's rate of return to the volatility of the market index's rate of return (or the equivalent excess rates of return). Hence, this line-fitting method gives an alternative beta which corresponds exactly to the relative volatility of an investment - one of the usual interpretations attached to beta.
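A sketch of the contrast with invented return data: the symmetric line fit described above has a slope equal to the ratio of the two volatilities (signed by the correlation), whereas the OLS beta is that ratio shrunk by the correlation coefficient.

```python
# Alternative beta as relative volatility (invented data; a sketch of
# the idea, not the paper's empirical work).  The symmetric fit's slope
# is sign(corr) * sd(stock) / sd(market); OLS beta equals corr times
# that, so it understates relative volatility whenever |corr| < 1.
import numpy as np

rng = np.random.default_rng(42)
market = rng.normal(0.0, 0.04, 500)                   # market excess returns
stock = 1.2 * market + rng.normal(0.0, 0.03, 500)     # stock excess returns

ols_beta = np.cov(stock, market, ddof=1)[0, 1] / np.var(market, ddof=1)

corr = np.corrcoef(stock, market)[0, 1]
alt_beta = np.sign(corr) * np.std(stock, ddof=1) / np.std(market, ddof=1)

print(ols_beta, alt_beta)    # OLS beta = corr * alt_beta, so alt_beta is larger here
```

Because the identity OLS beta = corr × (sd ratio) is exact, the two estimators agree only for perfectly correlated series.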


Journal of the Operational Research Society | 2001

Combining two approaches to efficiency assessment

Chris Tofallis

The advent of data envelopment analysis (DEA) enabled the measurement of efficiency to be extended to the case of multiple outputs. Prior to DEA we had the parametric approach based on multiple regression. We highlight some difficulties associated with these two approaches and present a hybrid which overcomes them whilst maintaining the respective advantages of each. This hybrid models the efficient frontier using an algebraic expression; the resulting smooth representation allows all units to be naturally enveloped and hence slacks to be avoided. (Slacks are potential improvements for inefficient units which are not accounted for in the DEA (radial) score, and so have been problematic for DEA.) The approach identifies the DEA-efficient units and fits a smooth model to them using maximum correlation modelling. This new technique extends the method of multiple regression to the case where there are multiple variables on each side of the model equation (eg outputs and inputs). The resulting expression for the frontier permits managers to estimate the effect on their efficiency score of adjustments in one or more input or output levels.


The Statistician | 1999

Model building with multiple dependent variables and constraints

Chris Tofallis

The most widely used method for finding relationships between several quantities is multiple regression. This, however, is restricted to a single dependent variable. We present a more general method which allows models to be constructed with multiple variables on both sides of an equation and which can be computed easily using a spreadsheet program. The underlying principle (originating from canonical analysis) is that of maximizing the correlation between the two sides of the model equation.
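Since the principle comes from canonical analysis, the weights for the two sides can be obtained from the first canonical correlation pair. A sketch with invented data (the eigenproblem formulation below is the standard canonical-correlation computation, used here as a stand-in for the paper's spreadsheet approach):

```python
# Maximising the correlation between two sides of a model equation via
# the first canonical correlation pair (invented data).  We seek weight
# vectors w (for X) and v (for Y) maximising corr(X @ w, Y @ v).
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 2))                        # right-hand-side variables
latent = X @ np.array([0.7, 0.3])
Y = np.column_stack([latent + 0.1 * rng.normal(size=n),
                     2.0 * latent + 0.1 * rng.normal(size=n)])  # left-hand side

Xc, Yc = X - X.mean(0), Y - Y.mean(0)
Sxx, Syy = Xc.T @ Xc, Yc.T @ Yc
Sxy = Xc.T @ Yc

# First canonical pair: leading eigenvector of Sxx^-1 Sxy Syy^-1 Syx.
M = np.linalg.solve(Sxx, Sxy) @ np.linalg.solve(Syy, Sxy.T)
eigvals, eigvecs = np.linalg.eig(M)
w = eigvecs[:, np.argmax(eigvals.real)].real       # weights for the X side
v = np.linalg.solve(Syy, Sxy.T) @ w                # paired weights for Y

rho = np.corrcoef(Xc @ w, Yc @ v)[0, 1]
print(rho)                                         # near 1: the sides align
```

The achieved correlation squared equals the largest eigenvalue of the matrix above, which is what "maximum correlation" means operationally.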


Archive | 2008

Model fitting for multiple variables by minimising the geometric mean deviation

Chris Tofallis

We consider the problem of fitting a linear model for a number of variables but without treating any one of these variables as special, in contrast to regression where one variable is singled out as being a dependent variable. Each of the variables is allowed to have error or natural variability but we do not assume any prior knowledge about the distribution or variance of this variability. The fitting criterion we use is based on the geometric mean of the absolute deviations in each direction. This combines variables using a product rather than a sum and so allows the method to naturally produce units-invariant models; this property is vital for law-like relationships in the natural or social sciences.
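A two-variable sketch of the criterion, with invented data. The paper minimises absolute deviations; here squared deviations are used so the result can be checked against a closed form: for a line y = a + bx the vertical and horizontal squared deviations are e² and e²/b² (with e = y − a − bx), their geometric mean is e²/|b|, and minimising it gives the units-invariant slope sign(r)·sd(y)/sd(x).

```python
# Geometric-mean-deviation line fit, two-variable squared-loss sketch
# (invented data).  Minimising sum(e^2)/|b| over a grid of slopes should
# recover the closed form b = sign(r) * sd(y) / sd(x).
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.0, 2.0, 300)
y = 1.5 * x + rng.normal(0.0, 1.0, 300)

def gm_loss(b, x, y):
    a = y.mean() - b * x.mean()          # optimal intercept for squared loss
    e = y - a - b * x
    return np.sum(e * e) / abs(b)        # geometric mean of the two deviations

grid = np.linspace(0.5, 3.0, 25001)
b_hat = grid[int(np.argmin([gm_loss(b, x, y) for b in grid]))]

b_closed = np.sign(np.corrcoef(x, y)[0, 1]) * y.std(ddof=1) / x.std(ddof=1)
print(b_hat, b_closed)                   # grid minimiser matches sd(y)/sd(x)
```

The units invariance is visible in the closed form: rescaling x or y rescales the slope by exactly the ratio of the new units.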


Computers & Industrial Engineering | 2008

Selecting the best statistical distribution using multiple criteria

Chris Tofallis

When selecting a statistical distribution to describe a set of data there are a number of criteria that can be used. Rather than select one of these criteria, we look at how multiple criteria can be combined to make the final selection. Two approaches have previously been presented in Computers and Industrial Engineering. We review these, and present a simpler method based on multiplicative aggregation. This has the advantage of being able to combine measures which are not measured on the same scale without having to use a normalisation procedure. Moreover, this method is scale-invariant, so re-scaling the criteria values does not affect the final ranking. The method requires strictly positive criteria values measured on a ratio scale. The proposed multiplicative method is transparent and simple to understand, apply, and communicate.
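A sketch of the aggregation step with invented criterion values (the distribution names and statistics are illustrative): multiplying the criteria gives a composite score whose ranking cannot change when any one criterion is rescaled, which is why no normalisation is needed.

```python
# Multiplicative aggregation for distribution selection (invented,
# strictly positive criterion values; smaller = better on each).
import math

# rows: candidate distributions; columns: e.g. a goodness-of-fit
# statistic and an information criterion, on very different scales.
criteria = {
    "lognormal": [0.08, 152.0],
    "gamma":     [0.05, 148.0],
    "weibull":   [0.09, 160.0],
}

def composite(values):
    return math.prod(values)             # multiplicative aggregation

ranking = sorted(criteria, key=lambda d: composite(criteria[d]))

# Rescaling one criterion (here, halving the second column) multiplies
# every composite score by the same factor, so the ranking is unchanged.
rescaled = {d: [v[0], v[1] / 2.0] for d, v in criteria.items()}
ranking2 = sorted(rescaled, key=lambda d: composite(rescaled[d]))

print(ranking, ranking == ranking2)
```

An additive combination of these raw values would be dominated by the second column until some normalisation was imposed; the product sidesteps that choice entirely.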


Informs Transactions on Education | 2014

Add or Multiply? A Tutorial on Ranking and Choosing with Multiple Criteria

Chris Tofallis

Simple additive weighting is a well-known method for scoring and ranking alternative options based on multiple attributes. However, the pitfalls associated with this approach are not widely appreciated. For example, the apparently innocuous step of normalizing the various attribute data in order to obtain comparable figures leads to markedly different rankings depending on which normalization is chosen. When the criteria are aggregated using multiplication, such difficulties are avoided because normalization is no longer required. This removes an important source of subjectivity in the analysis because the analyst no longer has to make a choice of normalization type. Moreover, it also permits the modelling of more realistic preference behaviour, such as diminishing marginal utility, which simple additive weighting does not provide. The multiplicative approach also has advantages when aggregating the ratings of panel members. This method is not new but has been ignored for too long by both practitioners and teachers. We aim to present it in a nontechnical way and illustrate its use with data on business schools.
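The normalisation pitfall is easy to reproduce. A sketch with invented scores for three options on two equally weighted benefit attributes: additive weighting ranks them differently depending on whether attributes are normalised by their maximum or by their sum, while the weighted-product score needs no normalisation at all.

```python
# Additive weighting vs the multiplicative (weighted product) rule
# (invented data).  Two benefit attributes, equal weights.
options = {"A": (10.0, 1.0), "B": (6.0, 2.0), "C": (1.0, 9.0)}
weights = (0.5, 0.5)

def additive(norm_denoms):
    """Simple additive weighting after dividing each column by a denominator."""
    return {k: sum(w * v / d for w, v, d in zip(weights, vals, norm_denoms))
            for k, vals in options.items()}

cols = list(zip(*options.values()))
by_max = additive([max(c) for c in cols])     # normalise by column maximum
by_sum = additive([sum(c) for c in cols])     # normalise by column sum

rank_max = sorted(by_max, key=by_max.get, reverse=True)
rank_sum = sorted(by_sum, key=by_sum.get, reverse=True)

# Weighted product: weights become exponents; rescaling a column scales
# every score by the same factor, so no normalisation is required.
product = {k: vals[0] ** weights[0] * vals[1] ** weights[1]
           for k, vals in options.items()}
rank_prod = sorted(product, key=product.get, reverse=True)

print(rank_max, rank_sum, rank_prod)   # the two additive rankings disagree
```

With these numbers, max-normalisation puts A first while sum-normalisation puts C first, so the analyst's choice of normalisation decides the winner; the product rule gives a single ranking regardless of scale.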


Journal of the Operational Research Society | 2014

On constructing a composite indicator with multiplicative aggregation and the avoidance of zero weights in DEA

Chris Tofallis

Recently, there has been interest in combining the use of multiplicative aggregation together with data envelopment analysis (DEA). For example, Blancas et al (2013), Cook and Zhu (2013) and Giambona and Vassallo (2013); the first two of these are JORS papers. The purpose of this note is to highlight differences in the way multiplicative DEA is being applied and to draw attention to the fact that a units-invariant (ie, scale-invariant) form is available. Moreover, this model avoids the 'zero weight problem' in DEA (where criteria are effectively ignored).

Collaboration


Dive into Chris Tofallis's collaborations.

Top Co-Authors


Eren Demir

University of Hertfordshire
