
Publication


Featured research published by W. Michael Conklin.


European Journal of Operational Research | 2002

Robust estimation of priorities in the AHP

Stan Lipovetsky; W. Michael Conklin

A pairwise comparison matrix of the Analytic Hierarchy Process (AHP) is considered as a contingency table that helps to identify unusual or false data elicited from a judge. Special techniques are suggested for robust estimation of priority vectors. They include transformation of a Saaty matrix to a matrix of shares of preferences and solving an eigenproblem designed for the transformed matrices. We also introduce an optimizing objective that produces robust priority estimation. Numerical results obtained with these differing approaches are compared. The comparison demonstrates that robust estimation yields priority vectors that are not prone to the influence of possible errors among the elements of a pairwise comparison matrix.
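As a rough illustration of the transformation described above, the sketch below column-normalizes a Saaty matrix into shares of preferences and takes the principal eigenvector of the result as the priority estimate; the normalization scheme and the function name are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def priorities_from_saaty(A):
    """Estimate a priority vector from a Saaty pairwise-comparison matrix by
    (1) transforming it to a matrix of shares of preferences and
    (2) taking the principal eigenvector of the transformed matrix.
    Illustrative simplification of the approach described in the abstract."""
    A = np.asarray(A, dtype=float)
    # Shares of preferences: normalize each column so it sums to one.
    shares = A / A.sum(axis=0, keepdims=True)
    # Principal eigenvector of the shares matrix as the priority estimate.
    vals, vecs = np.linalg.eig(shares)
    w = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return w / w.sum()

# 3x3 Saaty matrix: item 0 moderately preferred to item 1, strongly to item 2.
A = [[1, 3, 5],
     [1/3, 1, 2],
     [1/5, 1/2, 1]]
print(priorities_from_saaty(A))  # roughly [0.65, 0.23, 0.12]
```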


European Journal of Operational Research | 2004

Customer Satisfaction Analysis: Identification of Key Drivers

W. Michael Conklin; Ken Powaga; Stan Lipovetsky

The problem of identifying key drivers in customer satisfaction analysis is considered in relation to Kano's theory of the relationship between product quality and customer satisfaction, using tools from cooperative game theory and risk analysis. We use Shapley value and attributable risk techniques to prioritize the key drivers of customer satisfaction, that is, the key dissatisfiers and key enhancers. We demonstrate the theoretical and practical advantages of the Shapley value and attributable risk concepts for elaborating an optimal marketing strategy.
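A minimal sketch of a Shapley-value driver analysis in this spirit, decomposing the model R² over all predictor coalitions; the helper functions and the brute-force enumeration (feasible only for a handful of drivers) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from itertools import combinations
from math import factorial

def r2(X, y, cols):
    """R-squared of an OLS fit of y on the predictor subset `cols` (with intercept)."""
    if not cols:
        return 0.0
    Xs = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ beta
    return 1.0 - resid.var() / y.var()

def shapley_importance(X, y):
    """Shapley decomposition of R^2 over predictors: each driver's value is its
    average marginal contribution to R^2 over all coalitions of the others."""
    n = X.shape[1]
    phi = np.zeros(n)
    for j in range(n):
        others = [k for k in range(n) if k != j]
        for size in range(len(others) + 1):
            for S in combinations(others, size):
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[j] += w * (r2(X, y, list(S) + [j]) - r2(X, y, list(S)))
    return phi

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2 * X[:, 0] + X[:, 1] + rng.normal(size=200)
print(shapley_importance(X, y))  # the contributions sum to the full-model R^2
```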


Computers & Operations Research | 2001

Multiobjective regression modifications for collinearity

Stan Lipovetsky; W. Michael Conklin

In this work we develop a new multivariate technique to produce regressions with interpretable coefficients that are close to, and of the same signs as, the pairwise regression coefficients. Using a multiobjective approach that incorporates the multiple and pairwise regressions into one objective, we reduce this technique to an eigenproblem that represents a hybrid between regression and principal component analyses. We show that our approach corresponds to a specific scheme of ridge regression with a total matrix added to the matrix of correlations.

Scope and purpose: One of the main goals of multiple regression modeling is to assess the importance of the predictor variables in determining the prediction. However, in practical applications inference about the regression coefficients can be difficult because real data are correlated and multicollinearity causes instability in the coefficients. In this paper we present a new technique to create a regression model that maintains the interpretability of the coefficients. We show with real data that it is possible to generate a model with coefficients that are similar to the easily interpretable pairwise relations of the predictors with the dependent variable, and that this model is similar to the regular multiple regression model in predictive ability.
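A small numerical illustration of the general ridge idea referenced above, assuming standardized variables and a plain ridge term added to the correlation matrix; the paper's specific "total matrix" scheme is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
# Two highly collinear predictors, both positively related to y.
x1 = rng.normal(size=n)
x2 = 0.95 * x1 + 0.3 * rng.normal(size=n)
y = x1 + x2 + rng.normal(size=n)

# Standardize so the coefficients are comparable to correlations.
Z = np.column_stack([x1, x2])
Z = (Z - Z.mean(0)) / Z.std(0)
yz = (y - y.mean()) / y.std()

Cxx = Z.T @ Z / n          # correlation matrix of the predictors
r = Z.T @ yz / n           # pairwise correlations of the predictors with y

for k in (0.0, 0.1, 1.0, 10.0):
    beta_k = np.linalg.solve(Cxx + k * np.eye(2), r)
    print(k, beta_k)
# As the added term grows, the coefficients shrink toward the direction of the
# pairwise correlations r, keeping interpretable, same-sign values.
```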


Pattern Recognition | 2005

Singular value decomposition in additive, multiplicative, and logistic forms

Stan Lipovetsky; W. Michael Conklin

Singular value decomposition (SVD) is widely used in data processing, reduction, and visualization. Applied to a positive matrix, the regular additive SVD by the first several dual vectors can yield irrelevant negative elements of the approximated matrix. We consider a multiplicative SVD modification that corresponds to minimizing the relative errors and always produces positive matrices at any approximation step. Another, logistic SVD modification can be used for decomposition of matrices of proportions: a regular SVD can yield elements beyond the zero-one range, while the modified decomposition keeps all elements within the correct range at any step of approximation. Several additional modifications of matrix approximation are also considered.
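A simplified sketch of the logistic variant for a matrix of proportions, assuming a plain logit transform, a truncated SVD in logit space, and an inverse-logit mapping back; this keeps every approximated element inside (0, 1) but is not necessarily the paper's exact formulation.

```python
import numpy as np

def logistic_svd_approx(P, rank=1, eps=1e-6):
    """Rank-k approximation of a matrix of proportions that stays in (0, 1):
    logit-transform, truncate the SVD, and map back through the logistic
    function. A simplified illustration of the logistic SVD idea above."""
    P = np.clip(np.asarray(P, dtype=float), eps, 1 - eps)
    L = np.log(P / (1 - P))                        # logit transform
    U, s, Vt = np.linalg.svd(L, full_matrices=False)
    Lk = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]   # truncated SVD in logit space
    return 1 / (1 + np.exp(-Lk))                   # back into (0, 1)

P = np.array([[0.10, 0.80, 0.55],
              [0.20, 0.90, 0.60],
              [0.05, 0.70, 0.40]])
approx = logistic_svd_approx(P, rank=1)
print(approx.round(3))   # every approximated element lies strictly in (0, 1)
```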


European Journal of Operational Research | 2006

Data aggregation and Simpson’s paradox gauged by index numbers

Stan Lipovetsky; W. Michael Conklin

Simpson's paradox is a phenomenon occurring in data aggregation in complex systems. It consists in the increase (decrease) of the rate in the aggregated data at the higher level with a simultaneous decrease (increase) of the rate in each subgroup at the lower levels of the hierarchy. Although the nature of this paradox is known, it is difficult to interpret without gauging the reasons for its occurrence. To capture the causes of the paradox, we elaborate measures based on index analysis and suggest applying them to estimate the changes due to the partial rates and to the structural effects. Using numerical examples from the marketing research field, we show how the elaborated gauges evaluate absolute and relative changes, helping to interpret the observed cases.
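A toy numerical example of the phenomenon, with a generic index-number split of the aggregate change into a partial-rate effect and a structural (mix) effect; the paper's own gauges are more elaborate than this sketch.

```python
import numpy as np

# Two customer segments, two periods. The response rate rises in BOTH segments,
# yet the aggregate rate falls because the mix shifts toward the low-rate
# segment: Simpson's paradox in aggregation.
n1, hits1 = np.array([200, 800]), np.array([20, 400])   # period 1 counts / successes
n2, hits2 = np.array([800, 200]), np.array([96, 110])   # period 2 counts / successes

r1, r2 = hits1 / n1, hits2 / n2              # segment rates: [0.10 0.50] -> [0.12 0.55]
w1, w2 = n1 / n1.sum(), n2 / n2.sum()        # segment shares in each period
R1, R2 = hits1.sum() / n1.sum(), hits2.sum() / n2.sum()  # aggregate: 0.42 -> 0.206

# Decompose the aggregate change into a rate effect and a structural effect.
rate_effect = np.sum(w2 * (r2 - r1))         # +0.026: the rates themselves improved
structure_effect = np.sum((w2 - w1) * r1)    # -0.240: the mix moved to the weak segment
print(R2 - R1, rate_effect + structure_effect)   # both equal -0.214
```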


International Journal of Mathematical Education in Science and Technology | 2004

Enhance-synergism and suppression effects in multiple regression

Stan Lipovetsky; W. Michael Conklin

Relations between pairwise correlations and the coefficient of multiple determination in regression analysis are considered. The conditions for the occurrence of enhancement-synergism and suppression effects, when the coefficient of multiple determination becomes larger than the sum of squared correlations of the dependent variable with the regressors, are discussed. It is shown that such effects can occur only for stochastic relations among variables with non-transitive signs of pairwise correlations. Consideration of these problems facilitates a better understanding of the properties of regression.
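A small worked example of the suppression effect, assuming standardized variables and a classical suppressor that is uncorrelated with the dependent variable; the numbers are illustrative.

```python
import numpy as np

# Classical suppression: x2 is uncorrelated with y but correlated with x1,
# yet adding it raises R^2 above the sum of squared pairwise correlations.
r_y = np.array([0.5, 0.0])              # correlations of y with x1, x2
Cxx = np.array([[1.0, 0.6],
                [0.6, 1.0]])            # correlation between the predictors

beta = np.linalg.solve(Cxx, r_y)        # standardized regression coefficients
R2 = r_y @ beta                         # coefficient of multiple determination
print(beta)                             # x2 receives a nonzero (negative) coefficient
print(R2, np.sum(r_y**2))               # 0.391 > 0.25: the suppression effect
```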


International Journal of Mathematical Education in Science and Technology | 2003

Classroom note: A model for considering multicollinearity

Stan Lipovetsky; W. Michael Conklin

In this work a simple and convenient model is proposed for studying features of the multicollinearity effect in regression analysis. Using some reasonable approximations, a multivariate regression is reduced to a kind of bivariate regression for each predictor. This approach yields criteria for identifying cases of evident multicollinearity and leads to a better understanding of the properties of multiple regression.
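A brief simulation, not from the paper, showing the instability that the note's model is meant to explain: as the correlation between two predictors approaches one, the spread of the estimated coefficients across repeated samples explodes.

```python
import numpy as np

rng = np.random.default_rng(42)

def coefs(rho, n=60):
    """OLS slope coefficients when two standardized predictors have correlation rho."""
    x1 = rng.normal(size=n)
    x2 = rho * x1 + np.sqrt(1 - rho**2) * rng.normal(size=n)
    y = x1 + x2 + rng.normal(size=n)
    X = np.column_stack([np.ones(n), x1, x2])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

for rho in (0.0, 0.9, 0.99):
    runs = np.array([coefs(rho) for _ in range(500)])
    print(rho, runs.std(axis=0).round(2))   # coefficient spread grows sharply as rho -> 1
```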


International Journal of Mathematical Education in Science and Technology | 2001

Regression as weighted mean of partial lines: interpretation, properties, and extensions

Stan Lipovetsky; W. Michael Conklin

This paper presents a useful interpretation of linear regression as the weighted mean of all the partial lines passing through each pair of observed points. It is shown that the coefficient of pairwise regression equals the averaged slope of all the partial lines, and this description is extended to multiple regression as well. This representation is used to describe weighted parametric and non-parametric regressions from a unified platform.
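The pairwise-line identity can be checked directly: in this sketch the OLS slope is recovered as the mean of the slopes of all lines through pairs of points, weighted by the squared horizontal spacing of each pair (the data are simulated for illustration).

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)
x = rng.normal(size=30)
y = 2.0 * x + rng.normal(size=30)

# Slope of every "partial line" through a pair of points, weighted by the
# squared horizontal distance between the two points.
pairs = list(combinations(range(len(x)), 2))
slopes = np.array([(y[j] - y[i]) / (x[j] - x[i]) for i, j in pairs])
weights = np.array([(x[j] - x[i])**2 for i, j in pairs])
weighted_mean_slope = np.sum(weights * slopes) / np.sum(weights)

# The ordinary least-squares slope coincides with that weighted mean.
b_ols = np.polyfit(x, y, 1)[0]
print(weighted_mean_slope, b_ols)   # the two values agree
```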


Advances in Adaptive Data Analysis | 2015

MaxDiff Priority Estimations with and without HB-MNL

Stan Lipovetsky; W. Michael Conklin

Maximum difference (MaxDiff) is a discrete choice modeling approach widely used in marketing research for finding utilities and preference probabilities among multiple alternatives. It can be seen as an extension of the paired comparison in the Thurstone and Bradley-Terry techniques to the simultaneous presentation of three, four, or more items to respondents. A respondent identifies the best and the worst items, so the remaining items are deemed intermediate in preference. Estimation of individual utilities is usually performed via hierarchical Bayesian (HB) multinomial logit (MNL) modeling. The MNL model can be reduced to a logit model on data composed of two specially constructed design matrices of prevalence from the best and the worst sides. The composed data can be of a large size, which makes logistic modeling less precise and very demanding in computer time and memory. This paper describes how the results for utilities and choice probabilities can be obtained from the raw data, and how empirical Bayes techniques can be applied instead of HB methods. This approach enriches MaxDiff and is useful for estimation on large data sets. The results of the analytical approach are compared with HB-MNL and several other techniques.
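A toy sketch of counting-based estimation in the spirit of the analytical approach: best-minus-worst scores normalized by exposure and mapped to choice probabilities with a softmax. The task data, variable names, and the softmax step are illustrative assumptions, not the paper's empirical Bayes estimator.

```python
import numpy as np

# Toy MaxDiff data: each task shows 4 of 6 items; the respondent marks one
# "best" and one "worst" item per task.
tasks = [
    {"shown": [0, 1, 2, 3], "best": 0, "worst": 3},
    {"shown": [2, 3, 4, 5], "best": 4, "worst": 3},
    {"shown": [0, 2, 4, 5], "best": 0, "worst": 5},
    {"shown": [1, 3, 4, 5], "best": 4, "worst": 1},
]

n_items = 6
best, worst, shown = np.zeros(n_items), np.zeros(n_items), np.zeros(n_items)
for t in tasks:
    shown[t["shown"]] += 1
    best[t["best"]] += 1
    worst[t["worst"]] += 1

# Best-minus-worst counting scores, normalized by exposure, then softmaxed
# into choice probabilities; a crude stand-in for the analytical utilities.
scores = (best - worst) / np.maximum(shown, 1)
probs = np.exp(scores) / np.exp(scores).sum()
print(scores.round(2))
print(probs.round(3))
```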


Technometrics | 2005

Multivariate Bayesian Statistics: Models for Source Separation and Signal Unmixing

W. Michael Conklin

Although the book contains a few good case studies directly related to quality engineering, many of the examples have nothing to do with industry (e.g., measurements on the shapes and sizes of painted turtle carapaces, weights of middle-aged men in fitness clubs, diagnosis of liver disease). These examples are used to illustrate the methodologies as they are being developed and make up the bulk of the results given in the book. On the bright side, the case studies are engaging. The data analysis objectives of the case studies are clearly described, and the presentations of results show how multivariate analysis can address those objectives.

One could argue that a book describing the application of multivariate methods to specific types of problems should aim to provide readers with a "feel" for how these methods work for their problems. For example, I have spent years encouraging microbiologists to use "industrial statistics," such as response surface methods or control charts. I have found that once they see in-depth case studies within their area of science that address the nuances of their data, they are almost completely on board. In this sense, the authors moderately succeed with their case studies. However, I would suggest that they make the data available to the public, because many people learn by reproducing the examples that they study.

To enable quality organizations to better use multivariate methods, this text should be supplemented with others. For example, the chapter on discrimination hits the major points for classical multivariate methods; however, a vast array of tools for discrimination currently exists, and the classical methods may not fit the problem well and may be far from optimal. For instance, many on-line measurement systems, such as optical coordinate measurement machines, produce large amounts of data on each sample, so there may be more variables than observations. Classical methods cannot handle this situation well (e.g., LDA with more variables than observations). I suggest the text by Hastie, Tibshirani, and Friedman (2001) to supplement this book if more expertise is required for discrimination, clustering, and principal components analysis, and the text by Mason and Young (2001) as a more in-depth resource for multivariate control charting.

All in all, the authors somewhat meet their intended goals. This text provides the motivation to use multivariate analysis, but could do a better job of providing the means.
