
Publication


Featured research published by Michinari Momma.


Conference on Learning Theory | 2003

Sparse Kernel Partial Least Squares Regression

Michinari Momma; Kristin P. Bennett

Partial Least Squares Regression (PLS) and its kernel version (KPLS) have become competitive regression approaches. KPLS performs as well as or better than support vector regression (SVR) for moderately sized problems, with the advantages of simple implementation, lower training cost, and easier parameter tuning. Unlike SVR, however, KPLS requires manipulation of the full kernel matrix, and the resulting regression function depends on the full training data. In this paper we rigorously derive a sparse KPLS algorithm. The underlying KPLS algorithm is modified to maintain sparsity in all steps. The resulting ν-KPLS algorithm explicitly models centering and bias rather than using kernel centering. An ε-insensitive loss function is used to produce sparse solutions in the dual space. The final regression function for ν-KPLS requires only a relatively small set of support vectors.
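
For context, the following is a minimal NumPy sketch of the dense KPLS baseline that the paper sparsifies, following the Rosipal and Trejo formulation for a single output. The function name and the explicit projection-matrix deflation are illustrative choices, not the paper's ν-KPLS; the point is that every training example receives a dual coefficient, which is exactly the density that the ε-insensitive loss removes.

import numpy as np

def kpls_fit(K, Y, n_components):
    # K: centered (n, n) kernel matrix; Y: (n, 1) centered targets
    n = K.shape[0]
    T = np.zeros((n, n_components))    # orthonormal latent scores
    U = np.zeros((n, n_components))    # y-side weight vectors
    Kres, Yres = K.copy(), Y.copy()
    for i in range(n_components):
        u = Yres[:, 0]                 # single-output case: no inner iteration
        t = Kres @ u                   # score vector for this component
        t /= np.linalg.norm(t)
        T[:, i], U[:, i] = t, u
        P = np.eye(n) - np.outer(t, t) # deflate the extracted direction
        Kres = P @ Kres @ P
        Yres = P @ Yres
    # dual coefficients: one per training point, hence a dense solution
    alpha = U @ np.linalg.solve(T.T @ K @ U, T.T @ Y)
    return alpha                       # f(x) = sum_j alpha[j] * k(x, x_j)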


Algorithmic Learning Theory | 2003

Efficiently Learning the Metric with Side-Information

Tijl De Bie; Michinari Momma; Nello Cristianini

A crucial problem in machine learning is to choose a representation of the data that emphasizes the relations we are interested in. In many cases this amounts to finding a suitable metric in the data space. In the supervised case, Linear Discriminant Analysis (LDA) can be used to find an appropriate subspace in which the data structure is apparent. Other ways to learn a suitable metric are found in [6] and [11]. Recently, however, significant attention has been devoted to learning a metric in the semi-supervised case. In particular, the work by Xing et al. [15] demonstrated how semi-definite programming (SDP) can be used to directly learn a distance measure that satisfies constraints given as side-information, obtaining a significant increase in clustering performance with the new representation. The approach is very interesting; however, its computational complexity severely limits its applicability to real machine learning tasks. In this paper we present an alternative way to incorporate side-information that specifies pairs of examples belonging to the same class. The approach is based on LDA and reduces to an efficient eigenproblem. The performance reached is very similar, but the complexity is only O(d³) instead of O(d⁶), where d is the dimensionality of the data. We also show how our method can be extended to deal with more general types of side-information.
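
The sketch below conveys the flavor of an LDA-style eigenproblem built from same-class pairs. It is an illustration under simplifying assumptions of my own (the name metric_from_pairs, the scatter construction, and the regularizer reg are all invented for the example), not the paper's exact algorithm.

import numpy as np
from scipy.linalg import eigh

def metric_from_pairs(X, same_pairs, n_dims, reg=1e-6):
    # X: (n, d) data; same_pairs: index pairs known to share a class
    Xc = X - X.mean(axis=0)
    S_total = Xc.T @ Xc / len(X)              # total scatter
    D = np.array([X[i] - X[j] for i, j in same_pairs])
    S_within = D.T @ D / len(same_pairs)      # scatter within linked pairs
    # generalized eigenproblem: find directions with high total variance
    # but low variance between same-class examples; O(d^3) work
    evals, evecs = eigh(S_total, S_within + reg * np.eye(X.shape[1]))
    order = np.argsort(evals)[::-1]           # largest ratios first
    W = evecs[:, order[:n_dims]]
    return W                                  # induced distance: ||W.T @ (x - y)||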


Archive | 2006

Constructing Orthogonal Latent Features for Arbitrary Loss

Michinari Momma; Kristin P. Bennett

A boosting framework for constructing orthogonal features targeted to a given loss function is developed. Combined with techniques from spectral methods such as PCA and PLS, an orthogonal boosting algorithm for linear hypotheses is used to efficiently construct orthogonal latent features selected to optimize the given loss function. The method is generalized to construct orthogonal nonlinear features using the kernel trick. The resulting method, Boosted Latent Features (BLF), is demonstrated both to construct valuable orthogonal features and to be a competitive inference method for a variety of loss functions. For the least squares loss, BLF reduces to the PLS algorithm and preserves all of its attractive properties. As in PCA and PLS, the resulting nonlinear features are valuable for visualization, dimensionality reduction, improving generalization by regularization, and use in other learning algorithms, but now these features can be targeted to a specific inference task or loss function. The extracted features factorize the data matrix, and the resulting low-rank approximation provides efficiency and stability in computation, an attractive characteristic of PLS-type methods. Computational results demonstrate the effectiveness of the approach on a wide range of classification and regression problems.
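
As a rough illustration of the orthogonal-boosting idea (not the paper's exact BLF algorithm), the sketch below extracts linear latent features sequentially: each component fits the negative gradient of the loss, is orthogonalized against the components already extracted, and updates the model. The grad_loss interface and the simple step-size rule are my own illustrative choices.

import numpy as np

def boosted_latent_features(X, y, n_features, grad_loss):
    # grad_loss(y, f) returns dL/df elementwise for the chosen loss
    n = X.shape[0]
    T = np.zeros((n, n_features))          # orthonormal latent scores
    f = np.zeros(n)                        # current model output
    for i in range(n_features):
        r = -grad_loss(y, f)               # pseudo-residual, as in gradient boosting
        w = X.T @ r                        # linear hypothesis fit to the residual
        t = X @ w
        t -= T[:, :i] @ (T[:, :i].T @ t)   # orthogonalize against past scores
        t /= np.linalg.norm(t)
        T[:, i] = t
        f += (t @ r) * t                   # exact line search for squared loss,
                                           # a plain gradient step otherwise
    return T, f

With grad_loss = lambda y, f: f - y (squared error), the extracted components coincide with PLS-style latent features, matching the reduction to PLS noted in the abstract.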


SIGKDD Explorations | 2003

Model Builder for Predictive Analytics & Fair Isaac's approach to KDD Cup 2003

Joel Carleton; Daragh Hartnett; Joseph P. Milana; Michinari Momma; Joseph Sirosh; Gabriela Surpi

Fair Isaac tackled the third task of KDD Cup 2003 using a predictive modeling approach that leveraged citation graphs, text mining, custom variable creation, and linear regression. The core tools we used are embedded in our Model Builder for Predictive Analytics (MBPA) product, which makes commercially available a broad set of previously proprietary methodologies used by Fair Isaac in predictive scoring systems such as credit risk and credit card fraud. This short paper reviews the KDD Cup problem, our approach, and the toolset. We analyze the predictive variables in the model, the main sources of prediction errors, and the steps that could be taken to alleviate such errors in future work.


Archive | 2006

Method and apparatus for recommendation engine using pair-wise co-occurrence consistency

Shailesh Kumar; Edmond D. Chow; Michinari Momma


SIAM International Conference on Data Mining | 2002

A Pattern Search Method for Model Selection of Support Vector Regression

Michinari Momma; Kristin P. Bennett


Archive | 2005

Method and apparatus for retail data mining using pair-wise co-occurrence consistency

Shailesh Kumar; Edmond D. Chow; Michinari Momma


Knowledge Discovery and Data Mining | 2002

MARK: a boosting algorithm for heterogeneous kernel models

Kristin P. Bennett; Michinari Momma; Mark J. Embrechts


Archive | 2006

Method and apparatus for initiating a transaction based on a bundle-lattice space of feasible product bundles

Shailesh Kumar; Edmond D. Chow; Michinari Momma


Knowledge Discovery and Data Mining | 2005

Efficient computations via scalable sparse kernel partial least squares and boosted latent features

Michinari Momma

Collaboration


Dive into Michinari Momma's collaboration.

Top Co-Authors

Kristin P. Bennett, Rensselaer Polytechnic Institute
Mark J. Embrechts, Rensselaer Polytechnic Institute
Fabio A. Arciniegas, Rensselaer Polytechnic Institute
Muhsin Ozdemir, Rensselaer Polytechnic Institute
Curt M. Breneman, Rensselaer Polytechnic Institute
Larry Lockwood, Rensselaer Polytechnic Institute
Robert H. Kewley, United States Military Academy