Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where William A. Young is active.

Publication


Featured research published by William A. Young.


Theoretical Issues in Ergonomics Science | 2011

A survey of methodologies for the treatment of missing values within datasets: limitations and benefits

William A. Young; Gary R. Weckman; William S. Holland

Knowledge discovery in ergonomics is complicated by the presence of missing data, because most methodologies do not tolerate incomplete sample instances. Data miners cannot always simply remove sample instances in which missing values occur. Imputation methods are needed to ‘fill in’ estimated values for the missing instances in order to construct a complete dataset. Even with emerging methodologies, the ergonomics field seems to rely on outdated imputation techniques. This survey presents an overview of a variety of imputation methods found in current academic research, which is not limited to ergonomic studies. The objective is to strengthen the community’s understanding of imputation methodologies and briefly highlight their benefits and limitations. This survey suggests that the multiple imputation method is the current state-of-the-art missing-value technique. This method has proven to be robust to many of the shortcomings that plague other methods and should be considered the primary choice for missing-value problems found in ergonomic studies.
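
As a loose illustration of the multiple-imputation idea the survey endorses, the sketch below generates several stochastic completions of a toy array and pools them. scikit-learn's IterativeImputer (a MICE-style chained-equations imputer) stands in for a full multiple-imputation procedure; the matrix `X` and the mean-pooling step are illustrative assumptions, not the survey's method.

```python
# A minimal multiple-imputation sketch; the toy matrix X and mean-pooling
# are assumptions, with IterativeImputer standing in for a MICE-style imputer.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

X = np.array([[1.0, 2.0], [3.0, np.nan], [np.nan, 6.0], [7.0, 8.0]])

# Draw m completed datasets from the stochastic imputer, then pool them.
m = 5
imputations = [
    IterativeImputer(sample_posterior=True, random_state=s).fit_transform(X)
    for s in range(m)
]
print(np.mean(imputations, axis=0))  # pooled estimates
```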


Neural Computing and Applications | 2012

Applying a hybrid artificial immune systems to the job shop scheduling problem

Gary R. Weckman; Akshata A. Bondal; Magda M. Rinder; William A. Young

In today’s economy, manufacturing sectors are challenged by high costs and low revenues. As part of managerial activities, scheduling plays an important role in optimizing cost, revenue, profit, time, and efficiency through the effective use of available resources. The objective of this research is to evaluate existing artificial immune system (AIS) principles, models, and applications, and to develop an algorithm applicable to job shop scheduling problems. The developed algorithm was based on the theories of the positive selection algorithm and the clonal selection principle. To test the algorithm, ten job shop scheduling problems were evaluated using the new AIS model. To validate the results, the same job shop scheduling problems were evaluated using a genetic algorithm (GA) model. The results of the two evaluations were compared against each other along the dimensions of optimality and robustness. The testing revealed that the AIS model was slightly less competitive than the GA model in the optimality test but outperformed the GA in robustness. Another key finding was that the robustness of the model increased as the best solutions produced by the model approached the known optimum.
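
The clonal selection principle named in the abstract can be sketched compactly. The toy below applies it to a single-machine sequencing problem (minimizing total completion time) rather than a real job shop instance; the processing times, population sizes, and hypermutation rule are all illustrative assumptions, not the paper's algorithm.

```python
# A toy clonal-selection sketch on single-machine sequencing; job data,
# population sizes, and the hypermutation rule are assumptions.
import random

PROC_TIMES = [4, 2, 7, 1, 5, 3]   # assumed processing times for six jobs
N = len(PROC_TIMES)

def affinity(seq):
    """Negative total completion time, so higher affinity is better."""
    t, total = 0, 0
    for job in seq:
        t += PROC_TIMES[job]
        total += t
    return -total

def mutate(seq, n_swaps):
    """Hypermutation: apply n_swaps random pairwise swaps."""
    s = seq[:]
    for _ in range(n_swaps):
        i, j = random.sample(range(N), 2)
        s[i], s[j] = s[j], s[i]
    return s

def clonal_selection(pop_size=20, n_clones=5, generations=100):
    pop = [random.sample(range(N), N) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=affinity, reverse=True)
        elites = pop[: pop_size // 2]
        offspring = []
        for rank, antibody in enumerate(elites):
            # Lower-affinity (higher-rank) antibodies mutate more heavily.
            offspring += [mutate(antibody, 1 + rank // 3) for _ in range(n_clones)]
        pop = sorted(elites + offspring, key=affinity, reverse=True)[:pop_size]
    return pop[0], -affinity(pop[0])

best_seq, best_cost = clonal_selection()
print("sequence:", best_seq, "total completion time:", best_cost)
```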


International Journal of Production Research | 2016

A robust optimisation model for production planning and pricing under demand uncertainty

Ehsan Ardjmand; Gary R. Weckman; William A. Young; Omid Sanei Bajgiran; Bizhan Aminipour

The profitability of every manufacturing plant depends on its pricing strategy and a production plan that supports customers’ demand. In this paper, a new robust multi-product and multi-period model for planning and pricing is proposed. The demand is considered to be uncertain and price-dependent; thus, for each price, a range of demands is possible. Unsatisfied demand is considered to be lost and hence no backlogging is allowed. The objective is to maximise profit over the planning horizon, which consists of a finite number of periods. To solve the proposed model, a modified unconscious search (US) algorithm is introduced. Several artificial test problems, along with a real-case implementation of the model in a textile manufacturing plant, are used to show the applicability of the model and the effectiveness of the US in tackling this problem. The results show that the proposed model can improve the profitability of the plant and that the US is able to find high-quality solutions in a very short time compared to exact methods.
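
To make the robust, price-dependent-demand setting concrete, the toy below grid-searches a single-period price and production quantity against worst-case demand in an interval, with unmet demand lost. The linear demand curve, the ±20% uncertainty band, the unit cost, and the grid search are all assumptions; the paper's multi-period model and unconscious search algorithm are not reproduced here.

```python
# A toy single-period robust pricing/production sketch; all numbers and
# the grid search are illustrative assumptions.
UNIT_COST = 4.0

def demand_interval(price):
    """Assumed price-dependent demand, known only up to a +/-20% band."""
    nominal = max(0.0, 100.0 - 5.0 * price)
    return 0.8 * nominal, 1.2 * nominal

def worst_case_profit(price, qty):
    # Unmet demand is lost (no backlogging); revenue caps at production.
    lo, hi = demand_interval(price)
    return min(price * min(qty, d) - UNIT_COST * qty for d in (lo, hi))

best = max(
    ((p, q, worst_case_profit(p, q))
     for p in (6, 8, 10, 12, 14)
     for q in range(0, 101, 5)),
    key=lambda t: t[2],
)
print("price=%s qty=%s worst-case profit=%.1f" % best)
```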


International Journal of Business | 2016

Defining, Understanding, and Addressing Big Data

Trevor J. Bihl; William A. Young; Gary R. Weckman

“Big Data” is an emerging term used within business, engineering, and other domains. Although Big Data is a popular term today, it is not a new concept. However, the means by which data can be collected are more readily available than ever, which makes Big Data more relevant than ever because it can be used to improve decisions and insights within the domains in which it is used. The term Big Data can be loosely defined as data that is too large for traditional analysis methods and techniques. In this article, a variety of prominent but loose definitions of Big Data is shared. In addition, a comprehensive overview of issues related to Big Data is summarized: the paper examines the forms, locations, and methods of analyzing and exploiting Big Data, along with current research on the topic. A myriad of tangential issues, from privacy to analysis methods, is also overviewed, and best practices are considered. Additionally, the epistemology and history of Big Data are examined, as well as the technical and societal problems that accompany it.


Neural Computing and Applications | 2010

Using a heuristic approach to derive a grey-box model through an artificial neural network knowledge extraction technique

William A. Young; Gary R. Weckman

Artificial neural networks (ANNs) are primarily used in academia for their ability to model complex nonlinear systems. Though ANNs have been used to solve practical problems in industry, they are not typically used in nonacademic environments because they are not well understood, are complicated to implement, or have the reputation of being “black-box” models. Few mathematical models exist that outperform ANNs. If a highly accurate model can be constructed, its knowledge should be used to understand and explain relationships in a system. Output surfaces can be analyzed in order to gain additional knowledge about the system being modeled. This paper presents a systematic approach to deriving a “grey-box” model from the knowledge obtained from an ANN. A database of an automobile’s gas mileage performance is used as a case study for the proposed methodology. The results show that the derived model generalizes system behavior better than other benchmarked methods.
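
The output-surface analysis step can be illustrated in a few lines: fit an ANN, then sweep one input while holding the others fixed to read off the learned response. The synthetic gas-mileage-style data and variable names below are illustrative assumptions, not the paper's dataset or grey-box extraction procedure.

```python
# A minimal output-surface sketch; the data and variable names are assumed.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
weight = rng.uniform(1.5, 4.5, 200)        # assumed: vehicle weight (1000 lb)
horsepower = rng.uniform(50, 200, 200)     # assumed: engine horsepower
mpg = 50 - 6 * weight - 0.05 * horsepower + rng.normal(0, 1, 200)

X = np.column_stack([weight, horsepower])
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                   random_state=0).fit(X, mpg)

# Probe the output surface along the weight axis, horsepower held at its mean.
sweep = np.linspace(1.5, 4.5, 7)
grid = np.column_stack([sweep, np.full_like(sweep, horsepower.mean())])
for w, yhat in zip(sweep, ann.predict(grid)):
    print(f"weight={w:.2f} -> predicted mpg={yhat:.1f}")
```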


Environmental Monitoring and Assessment | 2017

Water demand forecasting: review of soft computing methods

Iman Ghalehkhondabi; Ehsan Ardjmand; William A. Young; Gary R. Weckman

Demand forecasting plays a vital role in resource management for governments and private companies. Considering the scarcity of water and its inherent constraints, demand management and forecasting in this domain are critically important. Several soft computing techniques have been developed over the last few decades for water demand forecasting. This study focuses on soft computing methods of water consumption forecasting published between 2005 and 2015. These methods include artificial neural networks (ANNs), fuzzy and neuro-fuzzy models, support vector machines, metaheuristics, and system dynamics. Furthermore, while ANNs have been superior in many short-term forecasting cases, it is still very difficult to pick a single method as the overall best. According to the literature, various methods and their hybrids have been applied to water demand forecasting. However, it seems soft computing has much more to contribute; these contribution areas include, but are not limited to, various ANN architectures, unsupervised methods, deep learning, various metaheuristics, and ensemble methods. Moreover, it is found that soft computing methods are mainly used for short-term demand forecasting.
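
A short-term ANN forecaster of the kind the review covers can be sketched by lagging the consumption series and fitting an MLP for one-step-ahead prediction. The synthetic weekly series and the seven-day lag window below are illustrative assumptions.

```python
# A minimal lagged-feature ANN forecasting sketch; the synthetic series
# and lag count are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
t = np.arange(400)
demand = 100 + 10 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 2, t.size)

LAGS = 7  # predict today's demand from the previous seven days
X = np.column_stack([demand[lag : lag + demand.size - LAGS] for lag in range(LAGS)])
y = demand[LAGS:]

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
model.fit(X[:-30], y[:-30])                  # hold out the last 30 days
mae = np.abs(model.predict(X[-30:]) - y[-30:]).mean()
print(f"holdout MAE: {mae:.2f}")
```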


Neural Computing and Applications | 2012

Using artificial neural networks to enhance CART

William A. Young; Gary R. Weckman; Vijaya Hari; Harry S. Whiting; Andrew P. Snow

Accuracy is a critical factor in predictive modeling. A predictive model such as a decision tree must be accurate to draw conclusions about the system being modeled. This research aims at analyzing and improving the performance of classification and regression trees (CART), a decision tree algorithm, by evaluating and deriving a new methodology based on the performance of real-world data sets that were studied. This paper introduces a new approach to tree induction to improve the efficiency of the CART algorithm by combining the existing functionality of CART with the addition of artificial neural networks (ANNs). Trained ANNs are utilized by the tree induction algorithm by generating new, synthetic data, which have been shown to improve the overall accuracy of the decision tree model when actual training samples are limited. In this paper, traditional decision trees developed by the standard CART methodology are compared with the enhanced decision trees that utilize the ANN’s synthetic data generation, or CART+. This research demonstrates the improved accuracies that can be obtained with CART+, which can ultimately improve the knowledge that can be extracted by researchers about a system being modeled.
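
The core CART+ idea described above (an ANN labels synthetic points that augment a small real training set) can be sketched as follows. The dataset, sample sizes, and uniform sampling over observed feature ranges are assumptions for illustration, not the paper's exact procedure.

```python
# A minimal CART+ sketch: augment scarce training data with ANN-labeled
# synthetic samples before tree induction. Data and sizes are assumed.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=60, random_state=0)

ann = MLPClassifier(hidden_layer_sizes=(32,), max_iter=3000,
                    random_state=0).fit(X_tr, y_tr)

# Synthesize points in the observed feature ranges, labeled by the ANN.
rng = np.random.default_rng(0)
X_syn = rng.uniform(X_tr.min(axis=0), X_tr.max(axis=0), size=(500, X_tr.shape[1]))
y_syn = ann.predict(X_syn)

cart = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
cart_plus = DecisionTreeClassifier(random_state=0).fit(
    np.vstack([X_tr, X_syn]), np.concatenate([y_tr, y_syn])
)
print("CART :", cart.score(X_te, y_te))
print("CART+:", cart_plus.score(X_te, y_te))
```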


Journal of Healthcare Engineering | 2012

Healthcare Scheduling by Data Mining: Literature Review and Future Directions

Maria M. Rinder; Gary R. Weckman; Diana J. Schwerha; Andy Snow; Peter A. Dreher; Namkyu Park; Helmut W. Paschold; William A. Young; J. Warren

This article presents a systematic literature review of the application of industrial engineering methods in healthcare scheduling, with a focus on the role of patient behavior in scheduling. Nine articles that used mathematical programming, data mining, genetic algorithms, and local searches for optimum schedules were obtained from an extensive search of the literature. These methods are new approaches to solving problems in healthcare scheduling. Some are adapted from areas such as manufacturing and transportation. Key findings from these studies include reduced time for scheduling, the capability of solving more complex problems, and the incorporation of more variables and constraints simultaneously than traditional scheduling methods. However, none of these methods modeled no-show and walk-in patient behavior. Future research should include more variables related to the patient and/or the environment.


Neural Computing and Applications | 2015

Using Voronoi diagrams to improve classification performances when modeling imbalanced datasets

William A. Young; Scott Nykl; Gary R. Weckman; David M. Chelberg

An over-sampling technique called V-synth is proposed and compared to borderline SMOTE (bSMOTE), a common methodology used to balance an imbalanced dataset for classification purposes. V-synth is a machine learning methodology that allows synthetic minority points to be generated based on the properties of a Voronoi diagram. A Voronoi diagram is a collection of geometric regions that encapsulate classifying points in such a way that any point within a region is closer to its encapsulated classifying point than to any adjacent classifying point. Because of properties inherent to Voronoi diagrams, V-synth identifies exclusive regions of feature space where it is ideal to create synthetic minority samples. To test the generalization and application of V-synth, six databases from various problem domains were selected from the University of California Irvine’s Machine Learning Repository. Though not always guaranteed due to the random nature of synthetic over-sampling, significant evidence is presented that supports the hypothesis that V-synth more consistently leads to more accurate and better-balanced classification models than bSMOTE when the classification complexity of a dataset is high.
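
A loose approximation of Voronoi-guided over-sampling can be sketched by placing candidate synthetic minority points at Voronoi vertices whose nearest training sample belongs to the minority class, so new points land inside minority-dominated regions. This is only an illustration of the geometric idea, not the published V-synth algorithm; the data and the nearest-neighbor filtering rule are assumptions.

```python
# A loose Voronoi-guided over-sampling sketch; not the published V-synth.
import numpy as np
from scipy.spatial import Voronoi, cKDTree

rng = np.random.default_rng(0)
X_maj = rng.normal(0.0, 1.0, size=(100, 2))       # majority class
X_min = rng.normal(2.5, 0.5, size=(10, 2))        # minority class
X = np.vstack([X_maj, X_min])
y = np.array([0] * 100 + [1] * 10)

vor = Voronoi(X)
tree = cKDTree(X)
_, nearest = tree.query(vor.vertices)
# Keep vertices whose nearest sample is a minority point.
synthetic = vor.vertices[y[nearest] == 1]
print("generated", len(synthetic), "candidate synthetic minority points")
```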


International Journal of Data Analysis Techniques and Strategies | 2011

An investigation of TREPAN utilising a continuous oracle model

William A. Young; Gary R. Weckman; Maimuna H. Rangwala; Harry S. Whiting; Helmut W. Paschold; Andrew H. Snow; Chad Mourning

TREPAN is a decision tree algorithm that utilises artificial neural networks (ANNs) in order to improve partitioning conditions when sample data is sparse. When sample sizes are limited during the tree-induction process, TREPAN relies on an ANN oracle to create artificial sample instances. The original TREPAN implementation was limited to ANNs that were designed to be classification models. In other words, TREPAN was incapable of building decision trees from ANN models that were continuous in nature. Thus, the objective of this research was to modify the original implementation of TREPAN in order to develop and test decision trees derived from continuous-based ANN models. Though the modifications were minor, they are significant because they provide researchers and practitioners with an additional strategy to extract knowledge from a trained ANN regardless of its design. This research also explores how TREPAN’s adjustable settings influence predictive performance based on a dataset’s complexity and size.
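
The continuous-oracle idea can be illustrated compactly: a regression ANN answers queries, and a readable tree is induced from densely sampled, oracle-labeled points. The data, query scheme, and use of a plain regression tree below are assumptions; the full TREPAN induction (m-of-n splits, per-node query budgets) is not reproduced.

```python
# A minimal continuous-oracle sketch; a regression tree stands in for
# TREPAN's induction, and the data and query scheme are assumed.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(40, 2))               # sparse real sample
y = X[:, 0] ** 2 + 0.5 * X[:, 1] + rng.normal(0, 0.05, 40)

oracle = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                      random_state=0).fit(X, y)

# Query the oracle densely, then grow a shallow, readable tree.
X_query = rng.uniform(-2, 2, size=(2000, 2))
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(
    X_query, oracle.predict(X_query)
)
print(export_text(tree, feature_names=["x0", "x1"]))
```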

Collaboration


Dive into William A. Young's collaborations.

Top Co-Authors

Gary L. Fahnenstiel

Michigan Technological University


Trevor J. Bihl

Air Force Institute of Technology
