Christopher J. Merz
University of California, Irvine
Publications
Featured research published by Christopher J. Merz.
Machine Learning | 1999
Christopher J. Merz
Several effective methods have recently been developed for improving predictive performance by generating and combining multiple learned models. The general approach is to create a set of learned models either by applying an algorithm repeatedly to different versions of the training data, or by applying different learning algorithms to the same data. The predictions of the models are then combined according to a voting scheme. This paper focuses on the task of combining the predictions of a set of learned models. The method described uses the strategies of stacking and Correspondence Analysis to model the relationship between the learning examples and their classification by a collection of learned models. A nearest neighbor method is then applied within the resulting representation to classify previously unseen examples. The new algorithm does not perform worse than, and frequently performs significantly better than, other combining techniques on a suite of data sets.
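The combining scheme described in this abstract (represent examples by their pattern of base-model predictions, reduce that representation, then classify new cases by nearest neighbors) can be illustrated with a minimal Python sketch. This is not the paper's exact procedure: the correspondence analysis step is approximated here with a truncated SVD over an indicator coding of the predictions, scikit-learn-style fitted base models are assumed, and the function name combine_by_prediction_space is hypothetical.

# Minimal sketch (assumed names, not the published implementation): base-model
# predictions are indicator-coded, projected to a low-dimensional space with
# truncated SVD (standing in for correspondence analysis), and new examples are
# classified by nearest neighbors in that space.
import numpy as np
from sklearn.preprocessing import OneHotEncoder
from sklearn.decomposition import TruncatedSVD
from sklearn.neighbors import KNeighborsClassifier

def combine_by_prediction_space(base_models, X_train, y_train, X_new, k=5, dims=3):
    # Prediction matrix: one column of class labels per fitted base model.
    P_train = np.column_stack([m.predict(X_train) for m in base_models])
    P_new = np.column_stack([m.predict(X_new) for m in base_models])
    # Indicator (one-hot) coding of the prediction patterns.
    enc = OneHotEncoder(handle_unknown="ignore")
    Z_train = enc.fit_transform(P_train)
    Z_new = enc.transform(P_new)
    # Low-dimensional representation of examples by their prediction patterns.
    svd = TruncatedSVD(n_components=dims)
    R_train = svd.fit_transform(Z_train)
    R_new = svd.transform(Z_new)
    # Nearest-neighbor classification inside the reduced representation.
    knn = KNeighborsClassifier(n_neighbors=k).fit(R_train, y_train)
    return knn.predict(R_new)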
Machine Learning | 1999
Christopher J. Merz; Michael J. Pazzani
The goal of combining the predictions of multiple learned models is to form an improved estimator. A combining strategy must be able to robustly handle the inherent correlation, or multicollinearity, of the learned models while identifying the unique contributions of each. A progression of existing approaches and their limitations with respect to these two issues are discussed. A new approach, PCR*, based on principal components regression is proposed to address these limitations. An evaluation of the new approach on a collection of domains reveals that (1) PCR* was the most robust combining method, (2) correlation could be handled without eliminating any of the learned models, and (3) the principal components of the learned models provided a continuum of “regularized” weights from which PCR* could choose.
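The idea above, regressing the target on principal components of the correlated base-model predictions, can be approximated with off-the-shelf principal components regression. The Python sketch below is a stand-in under stated assumptions, not PCR* itself: the number of retained components is chosen by cross-validated grid search rather than the paper's selection procedure, and the helper name pcr_combiner is hypothetical.

# Sketch of a principal-components-regression combiner for correlated
# (multicollinear) base-model predictions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import GridSearchCV

def pcr_combiner(base_models, X_train, y_train, max_components=None):
    # Prediction matrix for the combining stage: one column per learned model.
    P = np.column_stack([m.predict(X_train) for m in base_models])
    if max_components is None:
        max_components = P.shape[1]
    # Regress the target on a cross-validated number of principal components
    # of the prediction matrix; all models are kept, only components are chosen.
    pipe = make_pipeline(PCA(), LinearRegression())
    grid = {"pca__n_components": list(range(1, max_components + 1))}
    search = GridSearchCV(pipe, grid, cv=5)
    search.fit(P, y_train)
    return search.best_estimator_

# Usage (hypothetical): combiner = pcr_combiner(models, X_tr, y_tr)
# y_hat = combiner.predict(np.column_stack([m.predict(X_te) for m in models]))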
International Conference on Artificial Intelligence and Statistics | 1996
Christopher J. Merz
Determining the conditions for which a given learning algorithm is appropriate is an open problem in machine learning. Methods for selecting a learning algorithm for a given domain have met with limited success. This paper proposes a new approach to predicting a given example’s class by locating it in the “example space” and then choosing the best learner(s) in that region of the example space to make predictions. The regions of the example space are defined by the prediction patterns of the learners being used. The learner(s) chosen for prediction are selected according to their past performance in that region. This dynamic approach to learning algorithm selection is compared to other methods for selecting from multiple learning algorithms. The approach is then extended to weight rather than select the algorithms according to their past performance in a given region. Both approaches are further evaluated on a set of ten domains and compared to several other meta-learning strategies.
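A rough Python sketch of the region-based selection idea follows: a region is keyed by the tuple of base-learner predictions on an example, per-learner accuracy is tallied per region on the training data, and the learner with the best record in a test example's region makes the final prediction. The helpers fit_region_accuracy and predict_dynamic are hypothetical names, and the fallback used outside known regions is an assumption, not the paper's method.

import numpy as np
from collections import defaultdict

def fit_region_accuracy(base_models, X_train, y_train):
    # Tally, per region (a tuple of the learners' predictions), how often each
    # learner was correct on the training data.
    P = np.column_stack([m.predict(X_train) for m in base_models])
    stats = defaultdict(lambda: np.zeros((len(base_models), 2)))  # [correct, seen]
    for preds, y in zip(P, y_train):
        counts = stats[tuple(preds)]
        for j, p in enumerate(preds):
            counts[j, 0] += (p == y)
            counts[j, 1] += 1
    return stats

def predict_dynamic(base_models, stats, X_new):
    # For each new example, let the learner with the best record in that
    # example's region predict; fall back to the first learner otherwise.
    P = np.column_stack([m.predict(X_new) for m in base_models])
    out = []
    for preds in P:
        key = tuple(preds)
        if key in stats:
            correct, seen = stats[key][:, 0], stats[key][:, 1]
            best = int(np.argmax(correct / np.maximum(seen, 1.0)))
        else:
            best = 0
        out.append(preds[best])
    return np.array(out)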
International Conference on Machine Learning | 1994
Michael J. Pazzani; Christopher J. Merz; Patrick M. Murphy; Kamal M. Ali; Timothy Hume; Clifford Brunk
Archive | 1999
Catherine Blake; Eamonn J. Keogh; Christopher J. Merz
Archive | 1998
Seth Hettich; Catherine Blake; Christopher J. Merz
Archive | 1998
Christopher J. Merz; Michael J. Pazzani
Neural Information Processing Systems | 1996
Christopher J. Merz; Michael J. Pazzani
Neural Information Processing Systems | 1997
Christopher J. Merz
International Conference on Machine Learning | 1995
Takefumi Yamazaki; Michael J. Pazzani; Christopher J. Merz