Publications


Featured research published by J. Dennis Cradit.


Psychometrika | 2001

A variable-selection heuristic for K-means clustering

Michael J. Brusco; J. Dennis Cradit

One of the most vexing problems in cluster analysis is the selection and/or weighting of variables in order to include those that truly define cluster structure, while eliminating those that might mask such structure. This paper presents a variable-selection heuristic for nonhierarchical (K-means) cluster analysis based on the adjusted Rand index for measuring cluster recovery. The heuristic was subjected to Monte Carlo testing across more than 2200 datasets with known cluster structure. The results indicate the heuristic is extremely effective at eliminating masking variables. A cluster analysis of real-world financial services data revealed that using the variable-selection heuristic prior to the K-means algorithm resulted in greater cluster stability.
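
A minimal sketch of the two ingredients the abstract mentions, K-means partitioning and the adjusted Rand index for comparing partitions. The backward-elimination rule below is an illustrative assumption, not the authors' heuristic:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def select_variables(X, k, ari_threshold=0.9, seed=0):
    """Greedy backward elimination for K-means variable selection (sketch).

    A variable is dropped when removing it leaves the K-means partition
    essentially unchanged (adjusted Rand index above the threshold), i.e.
    it is not needed to define the cluster structure.
    """
    keep = list(range(X.shape[1]))
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X[:, keep])
    improved = True
    while improved and len(keep) > 1:
        improved = False
        for j in list(keep):
            trial = [v for v in keep if v != j]
            trial_labels = KMeans(n_clusters=k, n_init=10,
                                  random_state=seed).fit_predict(X[:, trial])
            if adjusted_rand_score(labels, trial_labels) >= ari_threshold:
                keep, labels, improved = trial, trial_labels, True
                break
    return keep, labels
```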


Journal of Marketing Research | 2002

A Simulated Annealing Heuristic for a Bicriterion Partitioning Problem in Market Segmentation

Michael J. Brusco; J. Dennis Cradit; Stephanie Stahl

K-means clustering procedures are frequently used to identify homogeneous market segments on the basis of a set of descriptor variables. In practice, however, market research analysts often desire both homogeneous market segments and good explanation of an exogenous response variable. Unfortunately, the relationship between these two objective criteria can be antagonistic, and it is often difficult to find clustering solutions that yield adequate levels for both criteria. The authors present a simulated annealing heuristic for solving bicriterion partitioning problems related to these objectives. A large computational study and an empirical demonstration reveal the effectiveness of the methodology. The authors also discuss limitations and extensions of the method.
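
A rough sketch of how a simulated annealing heuristic can balance the two criteria. The weighted objective, the (unscaled) combination of the two terms, and all parameter values are illustrative assumptions rather than the authors' formulation; in practice the two criteria would be standardized before weighting:

```python
import numpy as np

def bicriterion_sa(X, y, k, w=0.5, t0=1.0, cooling=0.95, iters_per_temp=200,
                   t_min=1e-3, seed=0):
    """Simulated annealing sketch for a weighted bicriterion objective:
    w * (within-cluster SSE of the descriptors X, minimized) minus
    (1 - w) * (R^2 of the response y explained by cluster membership)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    labels = rng.integers(0, k, size=n)

    def cost(lab):
        sse = sum(((X[lab == g] - X[lab == g].mean(axis=0)) ** 2).sum()
                  for g in range(k) if np.any(lab == g))
        ss_between = sum((lab == g).sum() * (y[lab == g].mean() - y.mean()) ** 2
                         for g in range(k) if np.any(lab == g))
        r2 = ss_between / ((y - y.mean()) ** 2).sum()
        return w * sse - (1 - w) * r2

    current = cost(labels)
    best, best_labels = current, labels.copy()
    t = t0
    while t > t_min:
        for _ in range(iters_per_temp):
            i, new_g = rng.integers(n), rng.integers(k)
            if new_g == labels[i]:
                continue
            trial = labels.copy()
            trial[i] = new_g
            delta = cost(trial) - current
            # Metropolis rule: always accept improvements, sometimes accept worse moves
            if delta < 0 or rng.random() < np.exp(-delta / t):
                labels, current = trial, current + delta
                if current < best:
                    best, best_labels = current, labels.copy()
        t *= cooling
    return best_labels, best
```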


Multivariate Behavioral Research | 2008

Cautionary Remarks on the Use of Clusterwise Regression.

Michael J. Brusco; J. Dennis Cradit; Douglas Steinley; Gavin L. Fox

Clusterwise linear regression is a multivariate statistical procedure that attempts to cluster objects with the objective of minimizing the sum of the error sums of squares for the within-cluster regression models. In this article, we show that the minimization of this criterion makes no effort to distinguish the error explained by the within-cluster regression models from the error explained by the clustering process. In some cases, most of the variation in the response variable is explained by clustering the objects, with little additional benefit provided by the within-cluster regression models. Accordingly, there is tremendous potential for overfitting with clusterwise regression, which is demonstrated with numerical examples and simulation experiments. To guard against the misuse of clusterwise regression, we recommend a benchmarking procedure that compares the results for the observed empirical data with those obtained across a set of random permutations of the response measures. We also demonstrate the potential for overfitting via an empirical application related to the prediction of reflective judgment using high school and college performance measures.
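
The recommended permutation benchmark can be sketched as follows. The alternating-assignment fitter used here is a simplification of clusterwise regression for illustration, not the authors' estimation procedure:

```python
import numpy as np

def clusterwise_regression(X, y, k, n_iter=50, seed=0):
    """Naive alternating heuristic: assign each object to the cluster whose
    regression gives it the smallest squared residual, then refit the
    within-cluster least-squares models."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    Z = np.column_stack([np.ones(n), X])          # add intercept
    labels = rng.integers(0, k, size=n)
    for _ in range(n_iter):
        betas = [np.linalg.lstsq(Z[labels == g], y[labels == g], rcond=None)[0]
                 if (labels == g).sum() > Z.shape[1] else np.zeros(Z.shape[1])
                 for g in range(k)]
        resid = np.column_stack([(y - Z @ b) ** 2 for b in betas])
        new_labels = resid.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    sse = resid[np.arange(n), labels].sum()
    return labels, 1.0 - sse / ((y - y.mean()) ** 2).sum()   # labels, R^2

def permutation_benchmark(X, y, k, n_perms=20, seed=0):
    """If the fit on the observed response is close to the fit achievable on
    random permutations of the response, the apparent fit mostly reflects
    the clustering process (overfitting), not a real regression structure."""
    rng = np.random.default_rng(seed)
    r2_obs = clusterwise_regression(X, y, k)[1]
    r2_perm = [clusterwise_regression(X, rng.permutation(y), k, seed=s)[1]
               for s in range(n_perms)]
    return r2_obs, float(np.mean(r2_perm))
```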


Technometrics | 2009

An Exact Algorithm for Hierarchically Well-Formulated Subsets in Second-Order Polynomial Regression

Michael J. Brusco; Douglas Steinley; J. Dennis Cradit

Variable selection in multiple regression requires identifying the best subset from among a set of candidate predictors. In the case of polynomial regression, the variable selection process can be further complicated by the need to obtain subsets that are hierarchically well formulated. We present a branch-and-bound algorithm for selection of the best hierarchically well-formulated subset in second-order polynomial regression. We apply the new algorithm to a well-known data set from the regression literature and compare the results with those obtained from a branch-and-bound algorithm that does not impose the hierarchical constraints. This comparison reveals that the hierarchical constraints yield only a small penalty in explained variation. We offer Fortran and MATLAB implementations of the branch-and-bound algorithms as supplemental materials associated with this work.
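
The hierarchical well-formulation constraint is easy to state in code. The sketch below enumerates subsets exhaustively rather than with branch-and-bound, and the helper names are illustrative:

```python
import itertools
import numpy as np

def second_order_terms(p):
    """Term labels for a second-order polynomial in p predictors:
    linear terms (i,), squares (i, i), and pairwise interactions (i, j)."""
    linear = [(i,) for i in range(p)]
    second = [(i, j) for i in range(p) for j in range(i, p)]
    return linear + second

def is_hierarchically_well_formulated(subset):
    """Every second-order term's parent linear terms must also be present."""
    chosen = set(subset)
    return all(len(t) == 1 or ((t[0],) in chosen and (t[1],) in chosen)
               for t in subset)

def best_hwf_subset(X, y, max_terms=4):
    """Exhaustive search (a branch-and-bound algorithm would prune this)
    over hierarchically well-formulated subsets, ranked by R^2."""
    terms = second_order_terms(X.shape[1])
    cols = {t: X[:, t[0]] if len(t) == 1 else X[:, t[0]] * X[:, t[1]]
            for t in terms}
    best = (-np.inf, None)
    for size in range(1, max_terms + 1):
        for subset in itertools.combinations(terms, size):
            if not is_hierarchically_well_formulated(subset):
                continue
            Z = np.column_stack([np.ones(len(y))] + [cols[t] for t in subset])
            beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
            r2 = 1 - ((y - Z @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
            if r2 > best[0]:
                best = (r2, subset)
    return best   # (R^2, selected terms)
```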


British Journal of Mathematical and Statistical Psychology | 2005

Bicriterion methods for partitioning dissimilarity matrices.

Michael J. Brusco; J. Dennis Cradit

Partitioning indices associated with the within-cluster sums of pairwise dissimilarities often exhibit a systematic bias towards clusters of a particular size, whereas minimization of the partition diameter (i.e. the maximum dissimilarity element across all pairs of objects within the same cluster) does not typically have this problem. However, when the partition-diameter criterion is used, there is often a myriad of alternative optimal solutions that can vary significantly with respect to their substantive interpretation. We propose a bicriterion partitioning approach that considers both diameter and within-cluster sums in the optimization problem and facilitates selection from among the alternative optima. We developed several MATLAB-based exchange algorithms that rapidly provide excellent solutions to bicriterion partitioning problems. These algorithms were evaluated using synthetic data sets, as well as an empirical dissimilarity matrix.
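
A minimal sketch of the two criteria and a single-object relocation (exchange) pass. The weighted combination of diameter and within-cluster sums is an illustrative simplification of the bicriterion formulation:

```python
import numpy as np

def partition_criteria(D, labels, k):
    """Return (diameter, within-cluster sum of dissimilarities) for a
    partition of objects whose pairwise dissimilarities are in D."""
    diam, within = 0.0, 0.0
    for g in range(k):
        idx = np.where(labels == g)[0]
        if len(idx) > 1:
            block = D[np.ix_(idx, idx)]
            diam = max(diam, block.max())
            within += block.sum() / 2.0   # each pair counted once
    return diam, within

def exchange_heuristic(D, k, w=0.5, seed=0, max_passes=50):
    """Single-object relocation heuristic for a weighted combination of
    partition diameter and within-cluster sums."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    labels = rng.integers(0, k, size=n)

    def cost(lab):
        diam, within = partition_criteria(D, lab, k)
        return w * diam + (1 - w) * within

    current = cost(labels)
    for _ in range(max_passes):
        improved = False
        for i in range(n):
            for g in range(k):
                if g == labels[i]:
                    continue
                trial = labels.copy()
                trial[i] = g
                c = cost(trial)
                if c < current:
                    labels, current, improved = trial, c, True
        if not improved:
            break
    return labels, partition_criteria(D, labels, k)
```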


International Journal of Operations & Production Management | 2017

Cluster analysis in empirical OM research: survey and recommendations

Michael J. Brusco; Renu Singh; J. Dennis Cradit; Douglas Steinley

Purpose: The purpose of this paper is twofold. First, the authors provide a survey of operations management (OM) research applications of traditional hierarchical and nonhierarchical clustering methods with respect to key decisions that are central to a valid analysis. Second, the authors offer recommendations for practice with respect to these decisions.

Design/methodology/approach: A coding study was conducted for 97 cluster analyses reported in six OM journals during the period spanning 1994-2015. Data were collected with respect to: variable selection, variable standardization, method, selection of the number of clusters, consistency/stability of the clustering solution, and profiling of the clusters based on exogenous variables. Recommended practices for validation of clustering solutions are provided within the context of this framework.

Findings: There is considerable variability across clustering applications with respect to the components of validation, as well as a mix of productive and undesirable practices. This justifies the importance of the authors’ provision of a schema for conducting a cluster analysis.

Research limitations/implications: Certain aspects of the coding study required some degree of subjectivity with respect to interpretation or classification. However, in light of the sheer magnitude of the coding study (97 articles), the authors are confident that an accurate picture of empirical OM clustering applications has been presented.

Practical implications: The paper provides a critique and synthesis of the practice of cluster analysis in OM research. The coding study provides a thorough foundation for how the key decisions of a cluster analysis have been previously handled in the literature. Both researchers and practitioners are provided with guidelines for performing a valid cluster analysis.

Originality/value: To the best of the authors’ knowledge, no study of this type has been reported in the OM literature. The authors’ recommendations for cluster validation draw from recent studies in other disciplines that are apt to be unfamiliar to many OM researchers.


Communications in Statistics-theory and Methods | 2018

An integrated dominance analysis and dynamic programing approach for measuring predictor importance for customer satisfaction

Michael J. Brusco; J. Dennis Cradit; Susan Brudvig

Dominance analysis is a procedure for measuring the importance of predictors in multiple regression analysis. We show that dominance analysis can be enhanced using a dynamic programing approach for the rank-ordering of predictors. Using customer satisfaction data from a call center operation, we demonstrate how the integration of dominance analysis with dynamic programing can provide a better understanding of predictor importance. As a cautionary note, we recommend careful reflection on the relationship between predictor importance and variable subset selection. We observed that slight changes in the selected predictor subset can have an impact on the importance rankings produced by a dominance analysis.
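
The dominance-analysis ingredient is straightforward to sketch; the dynamic-programming rank-ordering step is omitted here, and this general-dominance computation is a standard textbook version rather than the authors' integrated procedure:

```python
import itertools
import numpy as np

def r_squared(X, y, cols):
    """R^2 of the least-squares regression of y on the columns in `cols`
    (plus an intercept)."""
    Z = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return 1 - ((y - Z @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()

def general_dominance(X, y):
    """General dominance weights: each predictor's incremental R^2 averaged
    over all subsets of the remaining predictors (averaged within each
    subset size, then across sizes)."""
    p = X.shape[1]
    weights = np.zeros(p)
    for j in range(p):
        others = [c for c in range(p) if c != j]
        by_size = []
        for size in range(p):
            gains = [r_squared(X, y, list(s) + [j])
                     - (r_squared(X, y, list(s)) if s else 0.0)
                     for s in itertools.combinations(others, size)]
            by_size.append(np.mean(gains))
        weights[j] = np.mean(by_size)
    return weights   # rank-ordering these gives predictor importance
```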


British Journal of Mathematical and Statistical Psychology | 2017

Gaussian model‐based partitioning using iterated local search

Michael J. Brusco; Emilie Shireman; Douglas Steinley; Susan Brudvig; J. Dennis Cradit

The emergence of Gaussian model-based partitioning as a viable alternative to K-means clustering fosters a need for discrete optimization methods that can be efficiently implemented using model-based criteria. A variety of alternative partitioning criteria have been proposed for more general data conditions that permit elliptical clusters, different spatial orientations for the clusters, and unequal cluster sizes. Unfortunately, many of these partitioning criteria are computationally demanding, which makes the multiple-restart (multistart) approach commonly used for K-means partitioning less effective as a heuristic solution strategy. As an alternative, we propose an approach based on iterated local search (ILS), which has proved effective in previous combinatorial data analysis contexts. We compared multistart, ILS and hybrid multistart-ILS procedures for minimizing a very general model-based criterion that assumes no restrictions on cluster size or within-group covariance structure. This comparison, which used 23 data sets from the classification literature, revealed that the ILS and hybrid heuristics generally provided better criterion function values than the multistart approach when all three methods were constrained to the same 10-min time limit. In many instances, these differences in criterion function values reflected profound differences in the partitions obtained.
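
A compact sketch of an unrestricted log-determinant partitioning criterion and an ILS loop around a relocation local search; the initialization, perturbation size, and stopping rules are illustrative assumptions, not the procedure evaluated in the paper:

```python
import numpy as np

def md_criterion(X, labels, k, ridge=1e-6):
    """Sum over clusters of n_g * log det(S_g), where S_g is the within-cluster
    covariance matrix; clusters smaller than the dimension are rejected."""
    total = 0.0
    for g in range(k):
        Xg = X[labels == g]
        if len(Xg) <= X.shape[1]:
            return np.inf
        S = np.cov(Xg, rowvar=False) + ridge * np.eye(X.shape[1])
        total += len(Xg) * np.log(np.linalg.det(S))
    return total

def local_search(X, labels, k):
    """First-improvement relocation of single objects."""
    best = md_criterion(X, labels, k)
    improved = True
    while improved:
        improved = False
        for i in range(len(X)):
            for g in range(k):
                if g == labels[i]:
                    continue
                trial = labels.copy()
                trial[i] = g
                c = md_criterion(X, trial, k)
                if c < best:
                    labels, best, improved = trial, c, True
    return labels, best

def iterated_local_search(X, k, n_perturb=5, n_rounds=20, seed=0):
    """ILS sketch: local search, then repeated perturbation (randomly relocate
    a few objects) followed by local search, keeping improvements only."""
    rng = np.random.default_rng(seed)
    init = rng.permutation(np.arange(len(X)) % k)   # balanced start
    labels, best = local_search(X, init, k)
    for _ in range(n_rounds):
        trial = labels.copy()
        idx = rng.choice(len(X), size=n_perturb, replace=False)
        trial[idx] = rng.integers(0, k, size=n_perturb)
        trial, c = local_search(X, trial, k)
        if c < best:
            labels, best = trial, c
    return labels, best
```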


Journal of Mathematical Psychology | 2005

ConPar: a method for identifying groups of concordant subject proximity matrices for subsequent multidimensional scaling analyses

Michael J. Brusco; J. Dennis Cradit


Journal of Operations Management | 2012

Emergent clustering methods for empirical OM research

Michael J. Brusco; Douglas Steinley; J. Dennis Cradit; Renu Singh

Collaboration


Dive into J. Dennis Cradit's collaboration.

Top Co-Authors

Renu Singh
South Carolina State University