Hotspot


Dive into the research topics where Hakan Altınçay is active.

Publication


Featured research published by Hakan Altınçay.


Speech Communication | 2000

An information theoretic framework for weight estimation in the combination of probabilistic classifiers for speaker identification

Hakan Altınçay; Mübeccel Demirekler

In this paper, we describe a relation between classification systems and information transmission systems. By looking at the classification systems from this perspective, we propose a method of classifier weight estimation for the linear (LIN-OP) and logarithmic opinion pool (LOG-OP) type classifier combination schemes for which some tools from information theory are used. These weights provide contextual information about the classifiers such as class dependent classifier reliability and global classifier reliability. A measure for decision consensus among the classifiers is also proposed which is formulated as a multiplicative part of the classifier weights. A method of selecting the classifiers which provide complementary information for the combination operation is given. Using the proposed method, two classifiers are selected to be used in the combination operation. Simulation experiments in closed set speaker identification have shown that the method of weight estimation described in this paper improved the identification rates of both linear and logarithmic opinion type combination schemes. A comparison between the proposed method and some other methods of weight selection is also given at the end of the paper.
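The paper's contribution is the information-theoretic estimation of the weights; the two pooling rules those weights feed into can be sketched as follows (a minimal sketch: the `weights` values here are placeholders, not the reliability estimates derived in the paper):

```python
import numpy as np

def linear_opinion_pool(posteriors, weights):
    """LIN-OP: weighted arithmetic mean of the classifiers' posteriors."""
    return np.average(posteriors, axis=0, weights=weights)

def log_opinion_pool(posteriors, weights):
    """LOG-OP: weighted geometric mean of the posteriors, renormalized."""
    logp = np.sum(np.asarray(weights)[:, None] * np.log(posteriors), axis=0)
    p = np.exp(logp - logp.max())  # shift by the max for numerical stability
    return p / p.sum()

# Two speaker-ID classifiers scoring three speakers; the weights stand in
# for the classifier reliability estimates described in the paper.
posteriors = np.array([[0.6, 0.3, 0.1],
                       [0.5, 0.2, 0.3]])
weights = [0.7, 0.3]
lin = linear_opinion_pool(posteriors, weights)
lop = log_opinion_pool(posteriors, weights)
```

Both pools agree on the top-scoring speaker here; they differ in how strongly a single confident classifier can veto the others (LOG-OP drives the combined score toward zero whenever any member assigns a near-zero posterior).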


Pattern Recognition Letters | 2010

Analytical evaluation of term weighting schemes for text categorization

Hakan Altınçay; Zafer Erenel

An analytical evaluation of six widely used term weighting techniques for text categorization is presented. The analysis depends on expressing the term weights using term occurrence probabilities in the positive and negative categories. The weighting behaviors of the schemes considered are first clarified by analyzing the relation between the occurrence probabilities of terms that receive equal weights. Then, the weights are expressed in terms of the ratio and difference of term occurrence probabilities, revealing the similarities and differences among the schemes. Simulations show that the relative performance of different schemes can be explained by the ways they use the ratio and difference of term occurrence probabilities in generating the term weights.
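The six schemes evaluated in the paper are not reproduced here, but the ratio-versus-difference distinction the analysis rests on can be illustrated with two generic weights (illustrative choices, not necessarily among the six analyzed):

```python
import math

def odds_ratio(p_pos, p_neg, eps=1e-6):
    """Ratio-style weight: log odds of the term occurring in positive
    versus negative documents (eps guards against zero probabilities)."""
    return math.log((p_pos * (1 - p_neg) + eps) / ((1 - p_pos) * p_neg + eps))

def prob_difference(p_pos, p_neg):
    """Difference-style weight: the raw gap between the two
    occurrence probabilities."""
    return p_pos - p_neg

# A term occurring in 40% of positive and 5% of negative documents.
w_ratio = odds_ratio(0.4, 0.05)
w_diff = prob_difference(0.4, 0.05)
```

A ratio-style weight rewards rare but highly discriminative terms much more aggressively than a difference-style weight does, which is exactly the kind of behavioral contrast the analytical framework makes explicit.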


Applied Soft Computing | 2007

Ensembling evidential k-nearest neighbor classifiers through multi-modal perturbation

Hakan Altınçay

Ensembling techniques have already been considered for improving the accuracy of the k-nearest neighbor classifier, and it has been shown that strong ensembles can be generated by using a different feature subspace for each member classifier. The evidential k-NN classifier, which is based on the Dempster-Shafer theory of evidence, has a more flexible structure that is an obvious advantage from the diversity point of view and is observed to provide better classification accuracies than the voting-based k-NN classifier; however, ensembling it has not yet been fully studied. In this paper, we first investigate improving the performance of the evidential k-NN classifier using the random subspace method. Taking into account its potential to be perturbed in the parameter dimension as well, owing to its class- and classifier-dependent parameters, we propose ensembling the evidential k-NN classifier through multi-modal perturbation using genetic algorithms. Experimental results show that the improved accuracies obtained using the random subspace method can be further surpassed through multi-modal perturbation.
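The random subspace baseline can be sketched with ordinary voting k-NN members (a simplification: the paper's members are evidential k-NN classifiers with perturbed parameters, and the data below is a made-up toy problem):

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_predict(X, y, x, k=3):
    """Majority-vote k-NN prediction for a single query point."""
    order = np.argsort(np.linalg.norm(X - x, axis=1))
    return np.bincount(y[order[:k]]).argmax()

def random_subspace_knn(X, y, x, n_members=7, subspace_dim=2, k=3):
    """Each ensemble member sees only a random subset of the features;
    member decisions are combined by plurality voting."""
    preds = []
    for _ in range(n_members):
        feats = rng.choice(X.shape[1], size=subspace_dim, replace=False)
        preds.append(knn_predict(X[:, feats], y, x[feats], k))
    return np.bincount(preds).argmax()

# Tiny 4-feature toy problem: class 0 near the origin, class 1 shifted.
X = np.vstack([rng.normal(0.0, 0.3, (20, 4)),
               rng.normal(2.0, 0.3, (20, 4))])
y = np.array([0] * 20 + [1] * 20)
pred = random_subspace_knn(X, y, np.full(4, 2.0))
```

The multi-modal extension in the paper would additionally let a genetic algorithm perturb each member's class- and classifier-dependent parameters, not just its feature subspace.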


Applied Intelligence | 2006

On the independence requirement in Dempster-Shafer theory for combining classifiers providing statistical evidence

Hakan Altınçay

In classifier combination, the relative values of the beliefs assigned to different hypotheses are more important than accurate estimation of the combined belief function representing the joint observation space. Because of this, the independence requirement in Dempster's rule should be examined from the classifier combination point of view. In this study, it is investigated whether there is a set of dependent classifiers that provides better combined accuracy than independent classifiers when Dempster's rule of combination is used. The analysis, carried out for three different representations of statistical evidence, shows that the combination of dependent classifiers using Dempster's rule may provide much better combined accuracies than independent classifiers.
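Dempster's rule itself, which the independence question is about, can be sketched for two classifiers over a two-hypothesis frame (the mass values are illustrative assumptions):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions, given as
    dicts mapping frozensets of hypotheses to mass values."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass that lands on the empty set
    # Normalize by the non-conflicting mass.
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

# Two classifiers giving evidence over the frame {A, B}; some mass is
# left on the whole frame to express each classifier's uncertainty.
m1 = {frozenset('A'): 0.7, frozenset('B'): 0.2, frozenset('AB'): 0.1}
m2 = {frozenset('A'): 0.6, frozenset('B'): 0.3, frozenset('AB'): 0.1}
fused = dempster_combine(m1, m2)
```

Note the renormalization by `1 - conflict`: it preserves the relative ordering of beliefs, which is why, as the abstract argues, relative values matter more than the absolute accuracy of the combined belief function.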


Speech Communication | 2003

Speaker identification by combining multiple classifiers using Dempster–Shafer theory of evidence

Hakan Altınçay; Mübeccel Demirekler

This paper presents a multiple classifier approach as an alternative solution to the closed-set, text-independent speaker identification problem. The proposed algorithm, which is based on the Dempster–Shafer theory of evidence, computes first- and Rth-level ranking statistics. Rth-level confusion matrices extracted from these ranking statistics are used to cluster the speakers into model sets in which they share set-specific properties. Some of these model sets reflect the strengths and weaknesses of the classifiers, while others carry the speaker-dependent ranking statistics of the corresponding classifier. These information sets from multiple classifiers are combined to arrive at a joint decision. For the combination task, a rule-based algorithm is developed in which Dempster's rule of combination is applied in the final step. Experimental results show that the proposed method performs much better than some other rank-based combination methods.


international conference on pattern recognition | 2002

Why does output normalization create problems in multiple classifier systems?

Hakan Altınçay; Mübeccel Demirekler

Combining classifiers is a promising direction for obtaining better classification systems. However, the outputs of different classifiers may have different scales, and hence the classifier outputs are incomparable. Incomparability of the classifier output scores is a major problem in the combination of different classification systems. To avoid this problem, the measurement-level classifier outputs are generally normalized. However, recent studies have shown that output normalization may cause problems of its own; for instance, the performance of the multiple classifier system may become worse than that of a single individual classifier. This paper presents some interesting observations about why such undesirable behavior occurs.
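Both the incomparability problem and the fact that normalization changes combined decisions can be seen in a small sketch (the scores and the min-max scheme below are illustrative assumptions, not the paper's experimental setup):

```python
import numpy as np

def minmax_norm(scores):
    """Rescale raw classifier scores to the [0, 1] interval."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

# Two classifiers on three classes, producing scores on very
# different raw scales.
c1 = np.array([10.0, 9.5, 1.0])   # weakly prefers class 0 over class 1
c2 = np.array([0.2, 0.6, 0.1])    # clearly prefers class 1
raw_winner = (c1 + c2).argmax()    # c1's large scale dominates the sum
norm_winner = (minmax_norm(c1) + minmax_norm(c2)).argmax()
```

Summing the raw scores lets the large-scale classifier decide alone, while normalizing first flips the combined decision; which of the two outcomes is correct depends on the data, which is precisely why normalization needs the careful analysis the paper provides.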


Pattern Recognition | 2002

Plurality voting-based multiple classifier systems: statistically independent with respect to dependent classifier sets

Mübeccel Demirekler; Hakan Altınçay

The simultaneous use of multiple classifiers has been shown to provide performance improvements in classification problems. The selection of an optimal set of classifiers is an important part of multiple classifier systems, and the independence of classifier outputs is generally considered an advantage for obtaining better multiple classifier systems. In this paper, the need for classifier independence is examined from the classification performance point of view. The performance achieved with classifiers having independent joint distributions is compared to that of classifiers defined to have the best and worst joint distributions. These distributions are obtained by formulating the combination operation as an optimization problem. The analysis revealed several important observations about classifier selection, which are then used to analyze the problem of selecting an additional classifier to be used with an available multiple classifier system.
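The combination rule under study, plurality voting, is simple enough to state in a few lines (a minimal sketch; the paper's contribution is the analysis of joint output distributions, not the rule itself):

```python
from collections import Counter

def plurality_vote(decisions):
    """Return the label chosen by the largest number of classifiers;
    ties are broken by first occurrence among the inputs."""
    return Counter(decisions).most_common(1)[0][0]

# Five classifiers voting over labels; 'A' wins with three votes.
winner = plurality_vote(['A', 'B', 'A', 'C', 'A'])
```

Because the rule only counts votes, the system's accuracy is determined entirely by the joint distribution of the member decisions, which is what makes the best/worst joint-distribution formulation in the paper possible.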


Pattern Recognition Letters | 2003

Undesirable effects of output normalization in multiple classifier systems

Hakan Altınçay; Mübeccel Demirekler

Incomparability of the classifier output scores is a major problem in the combination of different classification systems. To deal with this problem, the measurement-level classifier outputs are generally normalized. However, empirical results have shown that output normalization may lead to some undesirable effects. This paper presents analyses of some of the most frequently used normalization methods, and it is shown that the main reason for these undesirable effects is the dimensionality reduction in the output space. An artificial classifier combination example and a real-data experiment are provided in which these effects are further clarified.


Pattern Recognition Letters | 2005

On naive Bayesian fusion of dependent classifiers

Hakan Altınçay

In classifier combination, the relative values of the a posteriori probabilities assigned to different hypotheses are more important than the accuracy of their estimates. Because of this, the independence requirement in naive Bayesian fusion should be examined from the combined accuracy point of view. In this study, it is investigated whether there is a set of dependent classifiers that provides better combined accuracy than independent classifiers when naive Bayesian fusion is used. For this purpose, the case of two classes and three classifiers is initially considered, where the pattern classes are not equally probable. Taking into account the increased complexity of the formulations, equal a priori probabilities are assumed in the general case of N classes and K classifiers. The analysis shows that the combination of dependent classifiers using naive Bayesian fusion may provide much better combined accuracies than independent classifiers.
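The fusion rule whose independence assumption is being questioned can be sketched directly (the posteriors and priors below are made-up numbers matching the paper's two-class, three-classifier, unequal-prior setting):

```python
import numpy as np

def naive_bayes_fusion(posteriors, priors):
    """Naive Bayesian fusion of K classifiers: under conditional
    independence, the joint posterior is proportional to the product of
    the individual posteriors divided by the prior raised to K - 1."""
    p = np.asarray(posteriors, dtype=float)
    fused = p.prod(axis=0) / np.asarray(priors, dtype=float) ** (len(p) - 1)
    return fused / fused.sum()

# Three classifiers, two unequally probable classes.
posteriors = [[0.8, 0.2],
              [0.6, 0.4],
              [0.7, 0.3]]
priors = [0.7, 0.3]
fused = naive_bayes_fusion(posteriors, priors)
```

The `prior ** (K - 1)` correction removes the prior counted once per classifier; when the classifiers are in fact dependent, the product over-counts their shared evidence, which is the effect the paper analyzes.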


Pattern Recognition | 2007

Decision trees using model ensemble-based nodes

Hakan Altınçay

Decision trees recursively partition the instance space by generating nodes that implement a decision function belonging to an a priori specified model class. Each decision may be univariate, linear or nonlinear. Alternatively, in omnivariate decision trees, one of the model types is dynamically selected by taking into account the complexity of the problem defined by the samples reaching that node. The selection is based on statistical tests where the most appropriate model type is selected as the one providing significantly better accuracy than others. In this study, we propose the use of model ensemble-based nodes where a multitude of models are considered for making decisions at each node. The ensemble members are generated by perturbing the model parameters and input attributes. Experiments conducted on several datasets and three model types indicate that the proposed approach achieves better classification accuracies compared to individual nodes, even in cases when only one model class is used in generating ensemble members.
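A stripped-down version of such a node can be sketched with perturbed univariate split thresholds (an illustration only: the hand-picked thresholds stand in for the paper's perturbation of model parameters and input attributes, and real nodes may use linear or nonlinear members):

```python
def ensemble_node_decision(x_value, thresholds):
    """A tree node that routes a sample left or right by majority vote
    over several perturbed univariate split thresholds, instead of a
    single fixed threshold."""
    left_votes = sum(x_value <= t for t in thresholds)
    return 'left' if left_votes > len(thresholds) / 2 else 'right'

# Perturbed copies of a base threshold of 0.5 (hand-picked here).
thresholds = [0.45, 0.48, 0.50, 0.53, 0.55]
branch = ensemble_node_decision(0.3, thresholds)
```

Samples far from the base threshold are routed unanimously, while samples near it are decided by the vote, smoothing the decision boundary at each node compared to a single split model.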

Collaboration


Dive into Hakan Altınçay's collaboration.

Top Co-Authors

Mübeccel Demirekler, Middle East Technical University
Ekrem Varoglu, Eastern Mediterranean University
Zafer Erenel, European University of Lefka
Cem Ergün, Eastern Mediterranean University
Dima Badawi, Eastern Mediterranean University
Ghazaal Sheikhi, Eastern Mediterranean University
Nazife Dimililer, Eastern Mediterranean University
Önsen Toygar, Eastern Mediterranean University
Adnan Acan, Eastern Mediterranean University
Ahmet Ünveren, Eastern Mediterranean University