Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Tony R. Martinez is active.

Publication


Featured research published by Tony R. Martinez.


Journal of Artificial Intelligence Research | 1997

Improved heterogeneous distance functions

D. Randall Wilson; Tony R. Martinez

Instance-based learning techniques typically handle continuous and linear input values well, but often do not handle nominal input attributes appropriately. The Value Difference Metric (VDM) was designed to find reasonable distance values between nominal attribute values, but it largely ignores continuous attributes, requiring discretization to map continuous values into nominal values. This paper proposes three new heterogeneous distance functions, called the Heterogeneous Value Difference Metric (HVDM), the Interpolated Value Difference Metric (IVDM), and the Windowed Value Difference Metric (WVDM). These new distance functions are designed to handle applications with nominal attributes, continuous attributes, or both. In experiments on 48 applications the new distance metrics achieve higher classification accuracy on average than three previous distance functions on those datasets that have both nominal and continuous attributes.
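As a rough illustration of the metric the abstract describes, the sketch below combines the two ingredients of HVDM: continuous attributes contribute |x - y| / (4 * std), nominal attributes contribute a normalized VDM over per-value class probabilities, and the per-attribute distances are combined Euclidean-style. Function and variable names are ours, and unseen nominal values are simply given distance 1; this is a minimal sketch, not the authors' implementation.

```python
import numpy as np

def hvdm(X, y, nominal):
    """Build an HVDM-style distance function for a dataset.

    X: 2-D array (object dtype allowed) of mixed attributes.
    y: 1-D array of class labels.
    nominal: set of column indices treated as nominal attributes.
    """
    classes = np.unique(y)
    n_attrs = X.shape[1]
    stds = [None] * n_attrs
    vdm_tables = [None] * n_attrs

    for a in range(n_attrs):
        if a in nominal:
            # P(class = c | attribute a has value v), for every observed value v
            table = {}
            for v in np.unique(X[:, a]):
                mask = X[:, a] == v
                table[v] = np.array([(y[mask] == c).mean() for c in classes])
            vdm_tables[a] = table
        else:
            stds[a] = X[:, a].astype(float).std()

    def distance(x1, x2):
        total = 0.0
        for a in range(n_attrs):
            if a in nominal:
                p1 = vdm_tables[a].get(x1[a])
                p2 = vdm_tables[a].get(x2[a])
                # simplification: unseen nominal values get maximal distance 1
                d = 1.0 if p1 is None or p2 is None else np.sqrt(((p1 - p2) ** 2).sum())
            else:
                denom = 4.0 * stds[a] if stds[a] > 0 else 1.0
                d = abs(float(x1[a]) - float(x2[a])) / denom
            total += d * d
        return np.sqrt(total)

    return distance

# Example with one continuous and one nominal attribute:
# X = np.array([[1.0, "red"], [2.0, "blue"], [3.0, "red"]], dtype=object)
# y = np.array([0, 1, 0])
# dist = hvdm(X, y, nominal={1})
# print(dist(X[0], X[1]))
```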


Machine Learning | 2000

Reduction Techniques for Instance-Based Learning Algorithms

D. Randall Wilson; Tony R. Martinez

Instance-based learning algorithms are often faced with the problem of deciding which instances to store for use during generalization. Storing too many instances can result in large memory requirements and slow execution speed, and can cause an oversensitivity to noise. This paper has two main purposes. First, it provides a survey of existing algorithms used to reduce storage requirements in instance-based learning algorithms and other exemplar-based algorithms. Second, it proposes six additional reduction algorithms called DROP1–DROP5 and DEL (three of which were first described in Wilson & Martinez, 1997c, as RT1–RT3) that can be used to remove instances from the concept description. These algorithms and 10 algorithms from the survey are compared on 31 classification tasks. Of those algorithms that provide substantial storage reduction, the DROP algorithms have the highest average generalization accuracy in these experiments, especially in the presence of uniform class noise.
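The sketch below shows a much-simplified decremental rule in the spirit of the DROP family: an instance is dropped when the retained instances that count it among their k nearest neighbors (its associates) are classified at least as well without it. It is an illustrative stand-in, not the published DROP1-DROP5 or DEL algorithms.

```python
import numpy as np
from collections import Counter

def drop_style_reduction(X, y, k=3):
    """Illustrative DROP-style reduction pass.

    X: numeric 2-D array, y: 1-D array of labels.
    Returns the sorted indices of the retained instances.
    """
    keep = set(range(len(X)))

    def neighbors(j, pool):
        # k nearest retained neighbors of instance j, drawn from `pool`
        cand = [p for p in pool if p != j]
        d = np.linalg.norm(X[cand] - X[j], axis=1)
        return [cand[t] for t in np.argsort(d)[:k]]

    def vote(j, pool):
        # majority class among the k nearest neighbors of j within `pool`
        return Counter(y[p] for p in neighbors(j, pool)).most_common(1)[0][0]

    for i in sorted(keep):
        # associates: retained instances that have i among their k nearest neighbors
        associates = [j for j in keep if j != i and i in neighbors(j, keep)]
        with_i = sum(vote(j, keep) == y[j] for j in associates)
        without_i = sum(vote(j, keep - {i}) == y[j] for j in associates)
        if without_i >= with_i:
            keep.discard(i)

    return sorted(keep)
```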


Information Sciences | 2000

Quantum associative memory

Dan Ventura; Tony R. Martinez

This paper combines quantum computation with classical neural network theory to produce a quantum computational learning algorithm. Quantum computation (QC) uses microscopic quantum level effects to perform computational tasks and has produced results that in some cases are exponentially faster than their classical counterparts. The unique characteristics of quantum theory may also be used to create a quantum associative memory (QuAM) with a capacity exponential in the number of neurons. This paper combines two quantum computational algorithms to produce such a quantum associative memory. The result is an exponential increase in the capacity of the memory when compared to traditional associative memories such as the Hopfield network. The paper covers necessary high-level quantum mechanical and quantum computational ideas and introduces a QuAM. Theoretical analysis proves the utility of the memory, and it is noted that a small version should be physically realizable in the near future.


Computational Intelligence | 2000

An Integrated Instance-Based Learning Algorithm

D. Randall Wilson; Tony R. Martinez

The basic nearest‐neighbor rule generalizes well in many domains but has several shortcomings, including inappropriate distance functions, large storage requirements, slow execution time, sensitivity to noise, and an inability to adjust its decision boundaries after storing the training data. This paper proposes methods for overcoming each of these weaknesses and combines the methods into a comprehensive learning system called the Integrated Decremental Instance‐Based Learning Algorithm (IDIBL) that seeks to reduce storage, improve execution speed, and increase generalization accuracy, when compared to the basic nearest neighbor algorithm and other learning models. IDIBL tunes its own parameters using a new measure of fitness that combines confidence and cross‐validation accuracy in order to avoid discretization problems with more traditional leave‐one‐out cross‐validation. In our experiments IDIBL achieves higher generalization accuracy than other less comprehensive instance‐based learning algorithms, while requiring less than one‐fourth the storage of the nearest neighbor algorithm and improving execution speed by a corresponding factor. In experiments on twenty‐one data sets, IDIBL also achieves higher generalization accuracy than that reported for sixteen major machine learning and neural network models.
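One plausible reading of the fitness measure mentioned above, which combines confidence with cross-validation accuracy, is sketched below using a k-nearest-neighbor base learner; the averaging scheme and the 5-fold CV are our own simplifications, not the exact IDIBL measure.

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsClassifier

def confidence_cv_fitness(X, y, k):
    """Combined fitness: mean of cross-validated accuracy and the
    cross-validated probability assigned to the true class (a smoother
    signal than accuracy alone). Illustrative stand-in only."""
    model = KNeighborsClassifier(n_neighbors=k)
    proba = cross_val_predict(model, X, y, cv=5, method="predict_proba")
    classes = np.unique(y)
    true_idx = np.searchsorted(classes, y)
    confidence = proba[np.arange(len(y)), true_idx].mean()
    accuracy = (classes[proba.argmax(axis=1)] == y).mean()
    return 0.5 * (accuracy + confidence)

# Tune k by maximizing the combined fitness, e.g.:
# best_k = max(range(1, 16), key=lambda k: confidence_cv_fitness(X, y, k))
```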


International Conference on Machine Learning and Applications | 2008

Decision Tree Ensemble: Small Heterogeneous Is Better Than Large Homogeneous

Michael Gashler; Christophe G. Giraud-Carrier; Tony R. Martinez

Using decision trees that split on randomly selected attributes is one way to increase the diversity within an ensemble of decision trees. Another approach increases diversity by combining multiple tree algorithms. The random forest approach has become popular because it is simple and yields good results with common datasets. We present a technique that combines heterogeneous tree algorithms and contrast it with homogeneous forest algorithms. Our results indicate that random forests do poorly when faced with irrelevant attributes, while our heterogeneous technique handles them robustly. Further, we show that large ensembles of random trees are more susceptible to diminishing returns than our technique. We are able to obtain better results across a large number of common datasets with a significantly smaller ensemble.
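A small heterogeneous tree ensemble of the general kind described here can be contrasted with a random forest in a few lines of scikit-learn; the specific member trees below (different split criteria plus a randomized-split tree) are illustrative choices, not the exact algorithms combined in the paper.

```python
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.tree import DecisionTreeClassifier, ExtraTreeClassifier
from sklearn.model_selection import cross_val_score

# Heterogeneous: members differ in split criterion and splitting strategy,
# not only in the bootstrap sample / random feature subset.
heterogeneous = VotingClassifier(
    estimators=[
        ("gini", DecisionTreeClassifier(criterion="gini")),
        ("entropy", DecisionTreeClassifier(criterion="entropy")),
        ("random_split", ExtraTreeClassifier()),  # splits on random thresholds
    ],
    voting="hard",
)

# Homogeneous baseline: a large forest of randomized trees of one kind.
homogeneous = RandomForestClassifier(n_estimators=100)

# Compare both on any (X, y) classification dataset:
# print(cross_val_score(heterogeneous, X, y, cv=5).mean())
# print(cross_val_score(homogeneous, X, y, cv=5).mean())
```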


International Symposium on Neural Networks | 2011

Improving classification accuracy by identifying and removing instances that should be misclassified

Michael R. Smith; Tony R. Martinez

Appropriately handling noise and outliers is an important issue in data mining. In this paper we examine how noise and outliers are handled by learning algorithms. We introduce a filtering method called PRISM that identifies and removes instances that should be misclassified. We refer to the set of removed instances as ISMs (instances that should be misclassified). We examine PRISM and compare it against 3 existing outlier detection methods and 1 noise reduction technique on 48 data sets using 9 learning algorithms. Using PRISM, classification accuracy increases from 78.5% to 79.8% on a set of 53 data sets, a statistically significant improvement. In addition, the accuracy on the non-outlier instances increases from 82.8% to 84.7%. PRISM achieves a higher classification accuracy than the outlier detection methods and compares favorably with the noise reduction method.
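A simplified filter in the spirit of PRISM can be sketched as follows: drop training instances that a panel of cross-validated learners always misclassifies. PRISM itself uses its own heuristics to identify ISMs, so the ensemble-vote version below is only an illustrative stand-in.

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

def filter_misclassified(X, y, threshold=1.0):
    """Remove instances misclassified by at least `threshold` fraction of
    a small panel of cross-validated learners (threshold=1.0 removes only
    unanimous misses). Illustrative stand-in for PRISM."""
    learners = [GaussianNB(), KNeighborsClassifier(), DecisionTreeClassifier()]
    wrong = np.zeros(len(y))
    for model in learners:
        pred = cross_val_predict(model, X, y, cv=5)
        wrong += (pred != y)
    frac_wrong = wrong / len(learners)
    keep = frac_wrong < threshold
    return X[keep], y[keep]
```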


Machine Learning | 2014

An instance level analysis of data complexity

Michael R. Smith; Tony R. Martinez; Christophe G. Giraud-Carrier

Most data complexity studies have focused on characterizing the complexity of the entire data set and do not provide information about individual instances. Knowing which instances are misclassified and understanding why they are misclassified and how they contribute to data set complexity can improve the learning process and could guide the future development of learning algorithms and data analysis methods. The goal of this paper is to better understand the data used in machine learning problems by identifying and analyzing the instances that are frequently misclassified by learning algorithms that have shown utility to date and are commonly used in practice. We identify instances that are hard to classify correctly (instance hardness) by classifying over 190,000 instances from 64 data sets with 9 learning algorithms. We then use a set of hardness measures to understand why some instances are harder to classify correctly than others. We find that class overlap is a principal contributor to instance hardness. We seek to integrate this information into the training process to alleviate the effects of class overlap and present ways that instance hardness can be used to improve learning.
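Instance hardness of the sort described above can be approximated by the fraction of cross-validated learners that misclassify each instance; the small panel of scikit-learn classifiers below is an illustrative subset rather than the paper's nine algorithms.

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

def instance_hardness(X, y, learners=None, cv=5):
    """Fraction of cross-validated learners that misclassify each instance:
    0 means every learner gets it right, 1 means none do."""
    if learners is None:
        learners = [GaussianNB(), KNeighborsClassifier(),
                    DecisionTreeClassifier(), LogisticRegression(max_iter=1000)]
    misses = np.zeros(len(y))
    for model in learners:
        pred = cross_val_predict(model, X, y, cv=cv)
        misses += (pred != y)
    return misses / len(learners)

# Inspect the hardest instances, e.g.:
# hardest = np.argsort(-instance_hardness(X, y))[:10]
```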


Foundations of Physics Letters | 2014

Initializing the Amplitude Distribution of a Quantum State

Dan Ventura; Tony R. Martinez

To date, quantum computational algorithms have operated on a superposition of all basis states of a quantum system. Typically, this is because it is assumed that some function f is known and implementable as a unitary evolution. However, what if only some points of the function f are known? It then becomes important to be able to encode only the knowledge that we have about f. This paper presents an algorithm that requires a polynomial number of elementary operations for initializing a quantum system to represent only the m known points of a function f.
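The target of the initialization problem can be illustrated classically: a state vector with equal amplitude on the m basis states that encode the known points of f and zero elsewhere. The NumPy sketch below only constructs that vector; the paper's contribution is a polynomial-length sequence of elementary quantum operations that prepares it.

```python
import numpy as np

def init_known_points(n_qubits, known_points):
    """Return the 2**n_qubits state vector with equal real amplitude
    1/sqrt(m) on each of the m given basis-state indices, 0 elsewhere."""
    state = np.zeros(2 ** n_qubits, dtype=complex)
    amp = 1.0 / np.sqrt(len(known_points))
    for basis_index in known_points:
        state[basis_index] = amp
    return state

# e.g. three known (input, output) pairs encoded as 4-qubit basis states:
# psi = init_known_points(4, known_points=[0b0101, 0b1001, 0b1110])
```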


International Symposium on Neural Networks | 1999

Cross validation and MLP architecture selection

Timothy L. Andersen; Tony R. Martinez

The performance of cross-validation (CV)-based MLP architecture selection is examined on 14 real-world problem domains. When testing many different network architectures, the results show that CV is only slightly more likely than random to select the optimal network architecture, and that the strategy of using the simplest available network architecture performs better than CV in this case. Experimental evidence suggests several reasons for the poor performance of CV. In addition, three general strategies that lead to a significant increase in the performance of CV are proposed. While this paper focuses on using CV to select the optimal MLP architecture, the strategies are also applicable when CV is used to select between several different learning models, whether the models are neural networks, decision trees, or other types of learning algorithms. When using these strategies, the average generalization performance of the network architecture that CV selects is significantly better than the performance of several other well-known machine learning algorithms on the data sets tested.
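The naive CV-based selection procedure the paper evaluates can be written in a few lines with scikit-learn; the candidate hidden-layer sizes and 5-fold CV below are illustrative choices.

```python
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

def select_architecture_by_cv(X, y, candidate_hidden_sizes=(2, 4, 8, 16, 32)):
    """Score each candidate hidden-layer size with k-fold CV and keep the best.
    The paper's finding is that, used this naively, CV is only slightly better
    than random at picking the optimal architecture."""
    scores = {}
    for h in candidate_hidden_sizes:
        model = MLPClassifier(hidden_layer_sizes=(h,), max_iter=1000)
        scores[h] = cross_val_score(model, X, y, cv=5).mean()
    return max(scores, key=scores.get), scores
```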


Archive | 1998

An Artificial Neuron with Quantum Mechanical Properties

Dan Ventura; Tony R. Martinez

Quantum computation uses microscopic quantum level effects to perform computational tasks and has produced results that in some cases are exponentially faster than their classical counterparts. Choosing the best weights for a neural network is a time consuming problem that makes the harnessing of this ‘quantum parallelism’ appealing. This paper briefly covers necessary high-level quantum theory and introduces a model for a quantum neuron.

Collaboration


Dive into Tony R. Martinez's collaborations.

Top Co-Authors

Dan Ventura, Brigham Young University
Xinchuan Zeng, Brigham Young University
D.R. Wilson, Brigham Young University
Joshua Menke, Brigham Young University