
Publication


Featured research published by Michiaki Taniguchi.


Neural Computation | 1997

Averaging regularized estimators

Michiaki Taniguchi; Volker Tresp

We compare the performance of averaged regularized estimators. We show that the improvement in performance that can be achieved by averaging depends critically on the degree of regularization used in training the individual estimators. We compare four different averaging approaches: simple averaging, bagging, variance-based weighting, and variance-based bagging. For all of the averaging methods, the greatest improvement over the individual estimators is achieved if no or only a small degree of regularization is used. Here, variance-based weighting and variance-based bagging are superior to simple averaging or bagging. Our experiments indicate that better performance for both individual estimators and for averaging is achieved in combination with regularization. With increasing degrees of regularization, the two bagging-based approaches (bagging and variance-based bagging) outperform the individual estimators, simple averaging, and variance-based weighting. Bagging and variance-based bagging appear to be the best overall combining methods over a wide range of degrees of regularization.
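
The averaging schemes compared above can be illustrated with a small sketch. The code below is illustrative only, not the paper's implementation: it combines the predictions of M estimators by simple averaging and by variance-based weighting, under the assumption that each estimator also supplies an estimate of its own error variance. The function names and toy data are hypothetical.

```python
# Minimal sketch: combining M estimators' predictions by simple averaging and
# by variance-based weighting.  Each estimator i supplies predictions y_i(x)
# and an estimate of its own error variance v_i(x); weights proportional to
# 1/v_i(x) are one common way to implement variance-based weighting.
import numpy as np

def simple_average(preds):
    """preds: array of shape (M, N) -- M estimators, N query points."""
    return preds.mean(axis=0)

def variance_weighted_average(preds, variances, eps=1e-12):
    """Weight each estimator inversely to its estimated error variance."""
    w = 1.0 / (variances + eps)           # shape (M, N)
    w = w / w.sum(axis=0, keepdims=True)  # normalize weights per query point
    return (w * preds).sum(axis=0)

# Toy usage: three noisy estimators of f(x) = sin(x)
rng = np.random.default_rng(0)
x = np.linspace(0, np.pi, 50)
noise_levels = np.array([0.05, 0.2, 0.5])
preds = np.sin(x) + noise_levels[:, None] * rng.standard_normal((3, x.size))
variances = np.tile((noise_levels ** 2)[:, None], (1, x.size))

print(np.abs(simple_average(preds) - np.sin(x)).mean())
print(np.abs(variance_weighted_average(preds, variances) - np.sin(x)).mean())
```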


WIT Transactions on Information and Communication Technologies | 2000

Input-Dependent Misclassification Costs for Cost-Sensitive Classifiers

Jaakko Hollmén; Michal Skubacz; Michiaki Taniguchi

In data mining and in classification specifically, cost issues have been undervalued for a long time, although they are of crucial importance in real-world applications. Recently, however, cost issues have received growing attention, see for example [1,2,3]. Cost-sensitive classifiers are usually based on the assumption of constant misclassification costs between given classes, that is, the cost incurred when an object of class j is erroneously classified as belonging to class i. In many domains, the same type of error may have differing costs due to particular characteristics of objects to be classified. For example, loss caused by misclassifying credit card abuse as normal usage is dependent on the amount of uncollectible credit involved. In this paper, we extend the concept of misclassification costs to include the influence of the input data to be classified. Instead of a fixed misclassification cost matrix, we now have a misclassification cost matrix of functions, separately evaluated for each object to be classified. We formulate the conditional risk for this new approach and relate it to the fixed misclassification cost case. As an illustration, experiments in the telecommunications fraud domain are used, where the costs are naturally data-dependent due to the connection-based nature of telephone tariffs. Posterior probabilities from a hidden Markov model are used in classification, although the described cost model is applicable with other methods such as neural networks or probabilistic networks.
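
A minimal sketch of this decision rule follows, assuming class posteriors P(j|x) are already available from some model (a hidden Markov model in the paper, but any classifier would do). The cost matrix holds functions of the input rather than constants, and the predicted class minimizes the conditional risk R(i|x) = sum_j C_ij(x) P(j|x). The function and variable names are illustrative, not from the paper.

```python
# Minimal sketch: cost-sensitive classification with an input-dependent cost
# matrix.  cost_fns[i][j] is a function of the object x giving the cost of
# predicting class i when the true class is j; the decision minimizes the
# conditional risk R(i | x) = sum_j cost_fns[i][j](x) * P(j | x).
import numpy as np

def classify_with_input_dependent_costs(posteriors, cost_fns, x):
    """posteriors: P(j|x) for each class j; cost_fns[i][j]: callable x -> cost."""
    n_classes = len(posteriors)
    risks = [
        sum(cost_fns[i][j](x) * posteriors[j] for j in range(n_classes))
        for i in range(n_classes)
    ]
    return int(np.argmin(risks)), risks

# Toy fraud example (classes: 0 = normal, 1 = fraud); x is the call charge.
# Missing a fraudulent call costs the uncollectible charge itself, so the cost
# of predicting "normal" when the truth is "fraud" grows with x.
cost_fns = [
    [lambda x: 0.0, lambda x: x],    # predict normal: free if normal, costs x if fraud
    [lambda x: 1.0, lambda x: 0.0],  # predict fraud: fixed investigation cost if normal
]
decision, risks = classify_with_input_dependent_costs([0.9, 0.1], cost_fns, x=50.0)
print(decision, risks)  # with a large charge, even a 10% fraud posterior triggers "fraud"
```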


International Conference on Artificial Neural Networks | 1997

Combining Regularized Neural Networks

Michiaki Taniguchi; Volker Tresp

In this paper we show that the improvement in performance which can be achieved by averaging depends critically on the degree of regularization which is used in training the individual neural networks. We compare four different averaging approaches: simple averaging, bagging, variance-based weighting and variance-based bagging. Bagging and variance-based bagging seem to be the overall best combining methods over a wide range of degrees of regularization.
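
As a complement to the weighting sketch above, the following is a minimal bagging sketch: bootstrap-resample the training set, fit one regularized estimator per resample, and average the predictions. Ridge regression stands in here for a regularized neural network purely for brevity, and the helper names are hypothetical rather than taken from the paper.

```python
# Minimal bagging sketch: bootstrap resampling plus simple averaging of
# regularized estimators (ridge regression as a stand-in for a regularized
# neural network; alpha is the regularization strength).
import numpy as np

def fit_ridge(X, y, alpha):
    """Closed-form ridge regression fit."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

def bagged_predict(X_train, y_train, X_test, alpha, n_bags=25, seed=0):
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_bags):
        idx = rng.integers(0, len(y_train), size=len(y_train))  # bootstrap sample
        w = fit_ridge(X_train[idx], y_train[idx], alpha)
        preds.append(X_test @ w)
    return np.mean(preds, axis=0)  # simple average of the bagged estimators

# Toy usage on a noisy linear problem
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.3 * rng.standard_normal(200)
X_test = rng.standard_normal((10, 5))
print(bagged_predict(X, y, X_test, alpha=1.0))
```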


Archive | 2002

Customer Insight by Exploring a Customer Behavior Model on Top of a Nucleus Database

Michiaki Taniguchi; Michael Haft; Reimar Hofmann

Customer Relationship Management has become mandatory for large and medium-scale enterprises. Many companies have already made significant investments to collect data in a customer-centric way, such that all information regarding a customer is accessible in one place, independent of channel (visit, telephone, web, ...), action (buy, complaint, information gathering, ...), department, or region. This customer-centric organization of data is mostly exploited in sales and services, where it allows the agent to quickly obtain all information regarding a customer, for example when the customer calls


Neural Information Processing Systems | 1994

Combining Estimators Using Non-Constant Weighting Functions

Volker Tresp; Michiaki Taniguchi


International Conference on Acoustics, Speech, and Signal Processing | 1998

Fraud detection in communication networks using neural and probabilistic methods

Michiaki Taniguchi; Michael Haft; Jaakko Hollmén; Volker Tresp


Archive | 1999

Marketing and controlling networks by using neurocomputing methods in network management data

Bernhard Nauer; Michiaki Taniguchi


Archive | 1998

Identification of a fraudulent call with a neural network

Michael Haft; Jaakko Hollmén; Michiaki Taniguchi; Volker Tresp


Archive | 1997

Optimizing the band width at the band ends on a mill train

Einar Bröse; Michiaki Taniguchi; Thomas Martinetz; Günter Sörgel; Otto Gramckow


Archive | 1998

Method for combining output signals of several estimators, in particular of at least one neural network, into a result signal determined by a global estimator

Michiaki Taniguchi; Volker Tresp
