D. Randall Wilson
Brigham Young University
Publications
Featured research published by D. Randall Wilson.
Journal of Artificial Intelligence Research | 1997
D. Randall Wilson; Tony R. Martinez
Instance-based learning techniques typically handle continuous and linear input values well, but often do not handle nominal input attributes appropriately. The Value Difference Metric (VDM) was designed to find reasonable distance values between nominal attribute values, but it largely ignores continuous attributes, requiring discretization to map continuous values into nominal values. This paper proposes three new heterogeneous distance functions, called the Heterogeneous Value Difference Metric (HVDM), the Interpolated Value Difference Metric (IVDM), and the Windowed Value Difference Metric (WVDM). These new distance functions are designed to handle applications with nominal attributes, continuous attributes, or both. In experiments on 48 applications the new distance metrics achieve higher classification accuracy on average than three previous distance functions on those datasets that have both nominal and continuous attributes.
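As a rough illustration of how such a heterogeneous metric can combine the two attribute types, the sketch below normalizes continuous differences by the attribute's standard deviation and compares nominal values through their per-class conditional frequencies. The 4-sigma scaling and the squared-difference form of the nominal term follow common descriptions of HVDM; it is a minimal sketch, not the paper's exact definitions.

```python
import numpy as np
from collections import defaultdict

def build_hvdm(X, y, nominal_cols, n_classes):
    """Build an HVDM-style distance function (a sketch, not the paper's exact
    definition). X may be an object array mixing numeric and nominal columns;
    nominal_cols lists the nominal attribute indices; y holds integer class
    labels in 0..n_classes-1."""
    nominal_cols = set(nominal_cols)
    n_features = X.shape[1]

    # P(class | attribute value) for each nominal attribute.
    vdm_probs = {}
    for a in nominal_cols:
        counts = defaultdict(lambda: np.zeros(n_classes))
        for value, cls in zip(X[:, a], y):
            counts[value][cls] += 1.0
        vdm_probs[a] = {v: c / c.sum() for v, c in counts.items()}

    # Standard deviation of each continuous attribute, used for scaling.
    sigma = {a: np.std(X[:, a].astype(float))
             for a in range(n_features) if a not in nominal_cols}

    def distance(xi, xj):
        total = 0.0
        for a in range(n_features):
            if a in nominal_cols:
                pi = vdm_probs[a].get(xi[a])
                pj = vdm_probs[a].get(xj[a])
                # Unseen nominal values are treated as maximally different.
                d = 1.0 if pi is None or pj is None else np.sqrt(np.sum((pi - pj) ** 2))
            else:
                s = sigma[a]
                d = abs(float(xi[a]) - float(xj[a])) / (4.0 * s) if s > 0 else 0.0
            total += d * d
        return np.sqrt(total)

    return distance
```

A k-nearest-neighbor classifier could then call this distance in place of Euclidean distance, so nominal attributes no longer have to be forced through a crude overlap measure or a discretization step.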
Machine Learning | 2000
D. Randall Wilson; Tony R. Martinez
Instance-based learning algorithms are often faced with the problem of deciding which instances to store for use during generalization. Storing too many instances can result in large memory requirements and slow execution speed, and can cause an oversensitivity to noise. This paper has two main purposes. First, it provides a survey of existing algorithms used to reduce storage requirements in instance-based learning algorithms and other exemplar-based algorithms. Second, it proposes six additional reduction algorithms called DROP1–DROP5 and DEL (three of which were first described in Wilson & Martinez, 1997c, as RT1–RT3) that can be used to remove instances from the concept description. These algorithms and 10 algorithms from the survey are compared on 31 classification tasks. Of those algorithms that provide substantial storage reduction, the DROP algorithms have the highest average generalization accuracy in these experiments, especially in the presence of uniform class noise.
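The core decision in reduction algorithms of this kind is whether removing an instance hurts the classification of the instances that rely on it. The sketch below is a hypothetical simplification in the spirit of the DROP family, not the published DROP1–DROP5 or DEL rules: an instance is dropped when its "associates", the kept instances that count it among their k nearest neighbors, are classified at least as well without it.

```python
import numpy as np

def decremental_reduction(X, y, dist, k=3):
    """Simplified decremental reduction in the spirit of the DROP family
    (a hypothetical sketch, not the published DROP1-DROP5 / DEL rules).
    dist(a, b) is any instance distance, e.g. an HVDM-style metric."""
    n = len(X)
    keep = set(range(n))

    def nearest(idx, pool):
        # Indices in pool (excluding idx) sorted by distance to instance idx.
        return sorted((j for j in pool if j != idx),
                      key=lambda j: dist(X[idx], X[j]))

    def knn_correct(idx, pool):
        # True if a k-NN majority vote over pool classifies instance idx correctly.
        votes = [y[j] for j in nearest(idx, pool)[:k]]
        return max(set(votes), key=votes.count) == y[idx]

    for p in range(n):
        # Associates of p: kept instances with p among their k nearest neighbors.
        associates = [a for a in keep if a != p and p in nearest(a, keep)[:k]]
        with_p = sum(knn_correct(a, keep) for a in associates)
        without_p = sum(knn_correct(a, keep - {p}) for a in associates)
        if without_p >= with_p:
            keep.discard(p)  # p is not needed by its associates
    return sorted(keep)
```

The published DROP3–DROP5 variants add refinements such as a noise-filtering pass and a careful removal order, which is where the robustness to uniform class noise reported above comes from.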
Computational Intelligence | 2000
D. Randall Wilson; Tony R. Martinez
The basic nearest‐neighbor rule generalizes well in many domains but has several shortcomings, including inappropriate distance functions, large storage requirements, slow execution time, sensitivity to noise, and an inability to adjust its decision boundaries after storing the training data. This paper proposes methods for overcoming each of these weaknesses and combines the methods into a comprehensive learning system called the Integrated Decremental Instance‐Based Learning Algorithm (IDIBL) that seeks to reduce storage, improve execution speed, and increase generalization accuracy, when compared to the basic nearest neighbor algorithm and other learning models. IDIBL tunes its own parameters using a new measure of fitness that combines confidence and cross‐validation accuracy in order to avoid discretization problems with more traditional leave‐one‐out cross‐validation. In our experiments IDIBL achieves higher generalization accuracy than other less comprehensive instance‐based learning algorithms, while requiring less than one‐fourth the storage of the nearest neighbor algorithm and improving execution speed by a corresponding factor. In experiments on twenty‐one data sets, IDIBL also achieves higher generalization accuracy than that reported for sixteen major machine learning and neural network models.
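Roughly speaking, the motivation for the combined fitness measure is that leave-one-out accuracy over n instances can take only n + 1 discrete values, so many parameter settings tie; adding the classifier's confidence in each instance's true class gives a smoother score. Below is a minimal sketch of such a combination; the averaging scheme is a hypothetical stand-in, not the exact formula used by IDIBL.

```python
import numpy as np

def combined_fitness(loo_correct, true_class_confidence):
    """Hypothetical fitness in the spirit of IDIBL's confidence-plus-accuracy
    criterion (the authors' exact formula is not reproduced here).

    loo_correct: boolean array, whether each held-out instance was classified
        correctly under leave-one-out cross-validation.
    true_class_confidence: the classifier's confidence (e.g. distance-weighted
        vote share) assigned to each held-out instance's true class.
    """
    accuracy = float(np.mean(loo_correct))               # coarse: only n+1 possible values
    confidence = float(np.mean(true_class_confidence))   # smooth tie-breaker
    return 0.5 * (accuracy + confidence)
```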
International Journal on Artificial Intelligence Tools | 2005
Jiang Li; Michael T. Manry; Changhua Yu; D. Randall Wilson
Algorithms that reduce the storage requirement of the nearest neighbor classifier (NNC) can be divided into three main categories: fast searching algorithms, instance-based learning algorithms, and prototype-based algorithms. We propose an algorithm, LVQPRU, for pruning NNC prototype vectors, which yields a compact classifier with good performance. The basic condensing algorithm is applied to the initial prototypes to speed up the learning process. The learning vector quantization (LVQ) algorithm is used to fine-tune the remaining prototypes during each pruning iteration. We evaluate LVQPRU on several data sets along with 12 other algorithms using ten-fold cross-validation. Simulation results show that the proposed algorithm has high generalization accuracy and good storage reduction ratios.
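The fine-tuning step can be pictured with the classic LVQ1 update rule, sketched below: the prototype nearest to a training sample is pulled toward it when their classes agree and pushed away when they differ. The learning rate and this particular LVQ variant are illustrative assumptions; in LVQPRU such updates are applied between pruning iterations.

```python
import numpy as np

def lvq1_step(prototypes, proto_labels, x, label, lr=0.05):
    """One LVQ1-style update (illustrative; LVQPRU's exact LVQ variant and
    schedule may differ). prototypes: (m, d) array; proto_labels: (m,) array;
    x: (d,) training sample with class `label`; lr: learning rate."""
    dists = np.linalg.norm(prototypes - x, axis=1)
    j = int(np.argmin(dists))                      # index of nearest prototype
    direction = 1.0 if proto_labels[j] == label else -1.0
    prototypes[j] += direction * lr * (x - prototypes[j])
    return prototypes
```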
International Conference on Machine Learning | 1997
D. Randall Wilson; Tony R. Martinez
Neural Networks | 2003
D. Randall Wilson; Tony R. Martinez
Archive | 1998
Tony R. Martinez; R. Brian Moncur; D. Lynn Shepherd; Randall J. Parr; D. Randall Wilson; Carl Hal Hansen
International Conference on Artificial Intelligence | 1996
D. Randall Wilson; Tony R. Martinez
National Conference on Artificial Intelligence | 1996
D. Randall Wilson; Tony R. Martinez
International Symposium on Neural Networks | 2000
D. Randall Wilson; Tony R. Martinez