Publication


Featured research published by William E. Spangler.


Anesthesiology | 2003

Estimating times of surgeries with two component procedures: comparison of the lognormal and normal models.

David P. Strum; Jerrold H. May; Allan R. Sampson; Luis G. Vargas; William E. Spangler

Background: Variability inherent in the duration of surgical procedures complicates surgical scheduling. Modeling the duration and variability of surgeries might improve time estimates. Accurate time estimates are important operationally to improve utilization, reduce costs, and identify surgeries that might be considered outliers. Surgeries with multiple procedures are difficult to model because they are difficult to segment into homogeneous groups and because they are performed less frequently than single-procedure surgeries.

Methods: The authors retrospectively studied 10,740 surgeries, each with exactly two CPT codes, and 46,322 surgical cases with only one CPT code from a large teaching hospital to determine whether the distribution of dual-procedure surgery times more closely fits a lognormal or a normal model. The authors tested model goodness of fit to their data using Shapiro-Wilk tests, studied factors affecting the variability of time estimates, and examined the impact of coding permutations (ordered combinations) on modeling.

Results: The Shapiro-Wilk tests indicated that the lognormal model is statistically superior to the normal model for modeling dual-procedure surgeries. Permutations of component codes did not appear to differ significantly with respect to total procedure time and surgical time. To improve individual models for infrequent dual-procedure surgeries, permutations may be reduced and estimates may be based on the longest component procedure and type of anesthesia.

Conclusions: The authors recommend use of the lognormal model for estimating surgical times for surgeries with two component procedures. Their results help legitimize the use of log transforms to normalize surgical procedure times prior to hypothesis testing using linear statistical models. Multiple-procedure surgeries may be modeled using the longest (statistically most important) component procedure and type of anesthesia.
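The core comparison, testing whether log-transformed durations look normal, is easy to reproduce. A minimal sketch, assuming synthetic lognormal durations in place of the authors' surgical data and using SciPy's Shapiro-Wilk test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical surgical durations in minutes, generated lognormal.
times = rng.lognormal(mean=np.log(90), sigma=0.4, size=500)

# Shapiro-Wilk on the raw times tests the normal model; applying it to
# the log-transformed times tests the lognormal model.
w_norm, p_norm = stats.shapiro(times)
w_lognorm, p_lognorm = stats.shapiro(np.log(times))
print(f"normal model:    W={w_norm:.3f}, p={p_norm:.4g}")
print(f"lognormal model: W={w_lognorm:.3f}, p={p_lognorm:.4g}")
# A W closer to 1 (and a non-rejecting p) on the log scale favors the
# lognormal model, mirroring the paper's finding.
```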


Health Care Management Science | 2004

Estimating Procedure Times for Surgeries by Determining Location Parameters for the Lognormal Model

William E. Spangler; David P. Strum; Luis G. Vargas; Jerrold H. May

We present an empirical study of methods for estimating the location parameter of the lognormal distribution. Our results identify the best order statistic to use, and indicate that using the best order statistic instead of the median may lead to less frequent incorrect rejection of the lognormal model, more accurate critical value estimates, and higher goodness-of-fit. Using simulation data, we constructed and compared two models for identifying the best order statistic, one based on conventional nonlinear regression and the other using a data mining/machine learning technique. Better surgical procedure time estimates may lead to improved surgical operations.
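The "best order statistic" question can be illustrated with the classical quick estimator for the three-parameter lognormal, which builds a location estimate from the sample extremes and one interior order statistic (traditionally the median). The sketch below runs on synthetic data; the estimator form and the choices of k are our illustration, not the authors' fitted models:

```python
import numpy as np
from scipy import stats

def lognormal_location(x_sorted, k):
    """Quick location estimate for a 3-parameter lognormal from the two
    extremes and the k-th order statistic (1-indexed)."""
    x1, xn, xk = x_sorted[0], x_sorted[-1], x_sorted[k - 1]
    return (x1 * xn - xk ** 2) / (x1 + xn - 2.0 * xk)

rng = np.random.default_rng(1)
loc_true = 30.0  # hypothetical fixed pre-procedure time, minutes
x = np.sort(loc_true + rng.lognormal(np.log(60), 0.5, size=400))

# Compare the median (the classical choice) with a lower order statistic,
# judging each by the Shapiro-Wilk fit of log(x - location).
for k in (len(x) // 2, len(x) // 4):
    loc_hat = lognormal_location(x, k)
    w, _ = stats.shapiro(np.log(x - loc_hat))
    print(f"k={k}: location estimate {loc_hat:.1f}, W={w:.3f}")
```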


Journal of Management Information Systems | 1999

Choosing data-mining methods for multiple classification: representational and performance measurement implications for decision support

William E. Spangler; Jerrold H. May; Luis G. Vargas

Data-mining techniques are designed for classification problems in which each observation is a member of one and only one category. We formulate ten data representations that could be used to extend those methods to problems in which observations may be full members of multiple categories. We propose an audit matrix methodology for evaluating the performance of three popular data-mining techniques--linear discriminant analysis, neural networks, and decision tree induction--using the representations that each technique can accommodate. We then empirically test our approach on an actual surgical data set. Tree induction gives the lowest rate of false positive predictions, and a version of discriminant analysis yields the lowest rate of false negatives for multiple category problems, but neural networks give the best overall results for the largest multiple classification cases. There is substantial room for improvement in overall performance for all techniques.
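As an illustration of classification where observations belong to several categories at once, the sketch below trains a multi-output decision tree (scikit-learn's tree induction, standing in for the specific tools compared in the paper) on synthetic data and reports the per-category false positive and false negative rates that the audit-matrix comparison focuses on:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 8))
# Each observation may belong to several categories at once: the label
# matrix Y has one binary indicator column per category.
Y = np.column_stack([
    (X[:, 0] + X[:, 1] > 0).astype(int),
    (X[:, 2] > 0.5).astype(int),
    (X[:, 0] - X[:, 3] > 0).astype(int),
])

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, Y_tr)
P = clf.predict(X_te)

# Per-category false positive and false negative rates.
for j in range(Y.shape[1]):
    fp = np.mean(P[Y_te[:, j] == 0, j] == 1)
    fn = np.mean(P[Y_te[:, j] == 1, j] == 0)
    print(f"category {j}: FP rate {fp:.3f}, FN rate {fn:.3f}")
```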


Communications of The ACM | 2003

Using data mining to profile TV viewers

William E. Spangler; Mordechai Gal-Or; Jerrold H. May

Mining thousands of viewing choices and millions of patterns, advertisers and TV networks identify household characteristics, tastes, and desires to create and deliver custom targeted advertising.


Management Science | 2006

Targeted Advertising Strategies on Television

Esther Gal-Or; Mordechai Gal-Or; Jerrold H. May; William E. Spangler

The personal video recorder (PVR) facilitates the use of targeted advertising by allowing companies to monitor television viewing behavior and to build demographic profiles of viewers from the data that are collected. Our research explores the extent to which an advertiser should allocate resources to increase the quality of its targeting. We present a game-theoretic model that extends the conventional measurement of targeting quality by exploring the trade-off between two measures: accuracy and recognition. Accuracy measures the likelihood that any target segment prediction is correct, while recognition conversely measures the likelihood that any member of the target segment is identified. We find that the relative resources allocated to improving accuracy and recognition depend upon the size of the population of viewers, the propensity of viewers to skip commercials, the overall cost of airing commercials, and the competitive environment. Furthermore, the incentives to improve accuracy are markedly different from those to improve recognition. Although improving accuracy does not affect the extent of price competition, improving recognition leads to intensified price competition and reduced profitability in the product market. Thus, when facing a competitor that pursues a strategy to improve its recognition of potential customers, an advertiser should choose to reduce its investment in recognition and increase its investment in accuracy.
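In classifier-evaluation terms, accuracy as defined here corresponds to precision and recognition to recall (our mapping, not the authors' terminology). A worked toy example with synthetic segment membership:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
member = rng.random(n) < 0.2                      # true target-segment members
# A hypothetical profiler: detects 70% of members, with 5% false alarms.
predicted = (member & (rng.random(n) < 0.7)) | (~member & (rng.random(n) < 0.05))

accuracy = (predicted & member).sum() / predicted.sum()    # P(member | flagged)
recognition = (predicted & member).sum() / member.sum()    # P(flagged | member)
print(f"accuracy (precision):  {accuracy:.3f}")
print(f"recognition (recall):  {recognition:.3f}")
```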


IEEE Transactions on Knowledge and Data Engineering | 1991

The role of artificial intelligence in understanding the strategic decision-making process

William E. Spangler

This survey focuses on research issues involving strategic planning and artificial intelligence (AI), including the nature of the formulation task, cognitive studies of strategic planners, and computer-based support for strategic planning. The author reviews the research to date and argues that, like traditional decision support systems (DSS) research, much of the potential for future research in this area lies in modeling the ill-structured, early stages of the strategic decision-making process, namely strategic intelligence analysis and issue diagnosis. He therefore discusses a specific model-based approach to the study of these early stages. Research in artificial intelligence--including investigations into diagnosis and situation assessment, analogical reasoning, plan recognition, nonmonotonic reasoning, and distributed intelligence, among others--can be used to build models of strategic decision making that help researchers better understand this traditionally unstructured activity.


Communications of The ACM | 2006

Exploring the privacy implications of addressable advertising and viewer profiling

William E. Spangler; Kathleen S. Hartzel; Mordechai Gal-Or

Collecting consumer viewing habits will come back to bite advertisers who do not understand or appreciate how consumers feel about privacy infringement.


Information Fusion | 2005

Assessing the predictive accuracy of diversity measures with domain-dependent, asymmetric misclassification costs

Mordechai Gal-Or; Jerrold H. May; William E. Spangler

We explore the relationship between diversity measures and ensemble performance, for binary classification with simple majority voting, within a problem domain characterized by asymmetric misclassification costs. Extending the work of Kuncheva and Whitaker [Machine Learning 51(2) (2003) 181], we compare a set of diversity measures within two different data representations. The first is a direct representation, which explicitly allows for consideration of asymmetric costs by indicating the specific values of the predictions, which in turn allows for a distinction between more costly misclassifications in this domain (i.e., actual 0 predicted as 1) and less costly ones (i.e., actual 1 predicted as 0). The second is an oracle representation, which indicates predictions as either correct or incorrect, and therefore does not allow for asymmetric costs. Within these representations we identified and manipulated certain situational factors, including the percentage of target group members in the population and the designed accuracy and sensitivity of each constituent model. Based on a neural network comparison of diversity measures and ensemble performance, we found that (1) diversity measure association with ensemble performance is contingent on the data representation, with Yule's Q statistic and coincident failure diversity (CFD) as the best indicators in the direct representation and CFD alone as the best indicator in the oracle representation, and (2) diversity measure association with ensemble performance varies as situational factors are manipulated; that is, diversity measures are differentially effective at different factor levels. Thus, the choice of a diversity measure in assessing ensemble classification performance requires an examination of both the nature of the task domain and the specific factors that comprise the domain.
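Both of the best-performing measures are straightforward to compute from oracle outputs. A small sketch on synthetic predictions, assuming Partridge and Krause's definition of CFD and the standard pairwise Yule's Q:

```python
import numpy as np

def yule_q(a, b):
    """Pairwise Yule's Q statistic from two oracle vectors (1 = correct)."""
    n11 = np.sum((a == 1) & (b == 1))
    n00 = np.sum((a == 0) & (b == 0))
    n10 = np.sum((a == 1) & (b == 0))
    n01 = np.sum((a == 0) & (b == 1))
    return (n11 * n00 - n01 * n10) / (n11 * n00 + n01 * n10)

def cfd(oracle):
    """Coincident failure diversity for an ensemble (rows = classifiers),
    following Partridge and Krause's definition."""
    L, n = oracle.shape
    failures = (oracle == 0).sum(axis=0)          # classifiers failing per case
    p = np.bincount(failures, minlength=L + 1) / n
    if p[0] == 1.0:
        return 0.0
    return sum((L - i) / (L - 1) * p[i] for i in range(1, L + 1)) / (1 - p[0])

rng = np.random.default_rng(4)
oracle = (rng.random((3, 500)) < 0.8).astype(int)  # three ~80%-accurate models
print("Yule's Q (models 0,1):", round(yule_q(oracle[0], oracle[1]), 3))
print("CFD:", round(cfd(oracle), 3))
```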


Journal of Medical Systems | 2002

A Data Mining Approach to Characterizing Medical Code Usage Patterns

William E. Spangler; Jerrold H. May; David P. Strum; Luis G. Vargas

This research describes a synthetic data mining approach to identifying diagnostic (ICD-9) and procedure (CPT) code usage patterns in two U.S. hospitals, with the goal of determining the adequacy and effectiveness of the current coding classification systems. We combine relative frequency measurements with measures of industry concentration borrowed from industrial economics in order to (1) ascertain the extent to which physicians utilize the available codes in classifying patients and (2) discover the factors that impinge on code usage. Our results partition the domain into areas for which the coding systems perform well and those areas for which the systems perform relatively poorly. The goal is to use this approach to understand how coding systems are used and to highlight areas for targeted improvement of the current coding systems.
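The abstract does not name the specific concentration measures used; the Herfindahl-Hirschman index (HHI) is the standard such measure from industrial economics, so the sketch below applies it to hypothetical code counts purely as an illustration:

```python
from collections import Counter

# Hypothetical usage counts for a few CPT codes within one clinical area.
counts = Counter({"44950": 120, "44960": 45, "44970": 300, "49000": 5})

total = sum(counts.values())
shares = [c / total for c in counts.values()]
# HHI ranges from 1/N (codes used evenly) up to 1.0 (one code dominates).
hhi = sum(s ** 2 for s in shares)
print(f"HHI = {hhi:.3f} across {len(counts)} codes")
```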


Expert Systems With Applications | 1991

Expertech: Issues in the design and development of an intelligent help desk system

Dolphy M. Abraham; William E. Spangler; Jerrold H. May

As more companies turn to expert systems to support traditional help desk activities, the need to understand the particular issues inherent in the development of such systems becomes increasingly important. This article describes the design and development of Expertech, a prototype expert system intended to assist a help desk operator in the resolution of common problems encountered by the users of a telecommunications network. Expertech, a rule-based system, is a partial implementation of a model of human expertise in network diagnosis. We argue that developers of intelligent help desk support systems (IHDSS) must consider several challenges not typically encountered in other environments. An IHDSS, for example, must consider the heterogeneous nature of the end users, and tailor its interface to encompass the varying technical knowledge of those users. Furthermore, an IHDSS must deal with the three-party nature of computer-based support, where the system does not interact directly with the end user. Instead, communication between the IHDSS and the end user takes place indirectly, through a human intermediary, the help desk operator. In the context of these considerations, we describe the approach we took in developing Expertech.
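As a toy illustration of the rule-based style the article describes, the sketch below forward-chains over invented symptom/advice rules; none of these rules come from Expertech itself:

```python
# Invented diagnosis rules: (required symptoms, advice). Illustrative only.
RULES = [
    ({"no_dial_tone", "other_lines_ok"}, "check station wiring"),
    ({"no_dial_tone", "other_lines_down"}, "escalate: possible switch outage"),
    ({"static_on_line"}, "inspect cable connections"),
]

def diagnose(symptoms):
    """Return advice for every rule whose conditions the reported symptoms satisfy."""
    return [advice for conditions, advice in RULES if conditions <= symptoms]

# Per the three-party model, the help desk operator (not the end user)
# relays symptoms to the system and passes the advice back.
print(diagnose({"no_dial_tone", "other_lines_ok"}))
```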

Collaboration


Dive into William E. Spangler's collaboration.

Top Co-Authors

Luis G. Vargas

University of Pittsburgh

Ann B. Pushkin

West Virginia University